What Machines Taught Us About Being Human
A reflection on how LLMs mirror our minds — and remind us how to grow.
We like to think we’re smarter than the machines we build.
And maybe we are — for now. But something odd has been happening lately.
As I’ve spent more time training, prompting, and poking large language models, I’ve started noticing… echoes.
Not just in their outputs. In their behaviours.
In the way they learn.
In how they fail.
In how they improve.
And in the way they pretend.
It started as a metaphor.
Now I believe it’s more than that:
Our brains are wetware LLMs.
Training and the Loop
If you’ve ever trained a model — or even just used one through an API — you start to internalize a rhythm.
Feed it examples.
Check the outputs.
Reinforce the good.
Penalize the bad.
Repeat until it improves.
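If you want to see that rhythm in code, here’s a toy sketch in Python. It isn’t an LLM, just one made-up weight fit to a handful of invented examples, but the loop has the same shape.

```python
# A toy version of the loop: one weight, a few examples,
# and feedback that nudges the weight toward better outputs.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # inputs and the outputs we want
w = 0.0                                      # the "model": a single parameter
learning_rate = 0.05

for step in range(200):
    for x, target in examples:
        prediction = w * x              # feed it an example
        error = prediction - target     # check the output
        w -= learning_rate * error * x  # penalize the bad, reinforce the good
    # repeat until it improves

print(f"learned weight: {w:.3f}")  # settles near 2.0
```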
This isn’t just machine learning.
This is how we learn everything.
When I was younger, trying to improve my Carnatic violin playing, I’d follow the same loop.
Play the swara.
Notice the sour note.
Replay.
Adjust fingering/intonation.
Rinse and repeat.
Eventually, the feedback loop got shorter. The ear began correcting the hand before the mind even intervened.
That’s tuning.
Or when I’m studying fiction, I don’t just read Dean Koontz or Lee Child. I type out their stories, word for word, a practice technique suggested by the prolific writer Dean Wesley Smith.
Copying isn’t plagiarism. It’s pretraining.
You learn cadence by mimicry. You learn structure by absorption.
And then, one day, you surprise yourself with an output that feels original — but you know, deep down, the gradient came from somewhere.
Model Behaviours We Share
Here’s the eerie part: the more you work with LLMs, the more human they feel — not in consciousness, but in quirks.
1. Overfitting
LLMs that are fine-tuned too aggressively on narrow data start parroting it — losing flexibility.
So do we.
Ever meet someone who mastered one domain and can’t unlearn their habits when switching fields? That’s human overfitting.
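If you want the machine version in miniature, here’s a rough numpy sketch with made-up data: a model flexible enough to memorize everything it saw, next to a plainer one forced to generalize.

```python
import numpy as np

# Toy overfitting: memorize five nearly-linear points with a very flexible curve.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 5)
y = 2 * x + rng.normal(0, 0.1, size=5)  # the "narrow data": roughly y = 2x, plus noise

flexible = np.polyfit(x, y, deg=4)  # enough freedom to hit every training point exactly
simple = np.polyfit(x, y, deg=1)    # a model that has to generalize

x_new = 1.5                         # a question just outside the training data
print(np.polyval(flexible, x_new))  # tends to swing wide out here; it learned the noise
print(np.polyval(simple, x_new))    # stays near the sensible answer, 2 * 1.5 = 3
```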
2. Hallucinations
LLMs generate plausible nonsense when unsure. So do we.
In meetings. On first dates. During interviews.
Confidence is not the same as correctness — for both machines and minds.
3. Context windows
LLMs can only “see” a certain number of tokens at once.
So can we.
Ever walk into a room and forget why you went in? That’s a context window shift. Our attention span — bounded. Our memory — fallible.
But we can prime our context deliberately — by journaling, outlining, visualizing. Just as you get better answers from a model when you include prior examples in the prompt.
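On the machine side, that priming looks something like this minimal Python sketch. The examples are invented, and the complete() call is a hypothetical stand-in for whatever API you actually use.

```python
# Priming the context: put prior examples into the prompt itself,
# so the model's limited window already contains the pattern you want.
examples = [
    ("The meeting ran long and nothing was decided.", "negative"),
    ("The demo worked on the first try.", "positive"),
]
new_input = "The review was short, focused, and actually useful."

prompt_lines = ["Classify the sentiment of each sentence."]
for text, label in examples:  # prior examples act as the primed context
    prompt_lines.append(f"Sentence: {text}\nSentiment: {label}")
prompt_lines.append(f"Sentence: {new_input}\nSentiment:")

prompt = "\n\n".join(prompt_lines)
print(prompt)
# response = complete(prompt)  # hypothetical call; swap in your actual LLM API
```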
4. Personas
LLMs can be given system prompts to behave a certain way: “Act like a Shakespearean actor”, “You are a helpful Linux admin”, “You’re a snarky writing coach”.
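Under the hood, a persona is usually just a standing instruction at the top of the conversation. Here’s a minimal sketch in the role-and-content message style many chat LLM APIs use; the client call is hypothetical.

```python
# A persona is a standing instruction that shapes every subsequent reply.
messages = [
    {"role": "system", "content": "You're a snarky writing coach. Be brief and blunt."},
    {"role": "user", "content": "Here's my opening paragraph. What do you think?"},
]
# reply = client.chat(messages)  # hypothetical client; use whatever chat API you have
```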
We do this too. We wear masks.
We speak differently at work than at home.
We switch from teacher mode to student mode.
We code-switch, dialect-shift, self-filter.
These personas aren’t fake.
They’re fine-tuned subsets of ourselves, optimized for task and audience.
Do Some Brains Have More Parameters?
Sometimes I wonder: if we stretch the metaphor, do people have different “parameter counts”?
Do some folks just have more neurons wired up, more memory bandwidth, more raw capacity?
Maybe.
But LLMs remind us: parameter count isn’t destiny.
It’s how you train.
What you expose yourself to.
What feedback you seek.
How often you iterate.
Even the largest models are dumb if they’ve been trained on trash.
And even a small model — carefully fine-tuned on the right data, guided with the right prompts — can outperform giants.
Same with people.
We’ve all met someone who had every advantage and squandered it.
We’ve all met someone else — less formally educated, less polished — who radiated clarity and depth because they trained deliberately.
It’s not about who has the most parameters.
It’s about who’s still in the loop.
LLM Attributes as Human Metaphors
Zero-shot vs Few-shot Learning
A child touching a hot stove once? Few-shot learning.
Reading five flashcards before a quiz? Few-shot.
Encountering a new idea and making sense of it because of prior abstractions? That’s zero-shot. That’s transfer.
Prompt Injection
Ever been influenced mid-conversation and changed your tone? That’s human prompt injection.
Context hijacks our behaviour more often than we care to admit.
Temperature
High-temperature models generate more creative outputs.
People too. Under constraints, some freeze (forgive the pun!). Others improvise. Your internal “temperature” — mindset, mood, caffeine level — changes how you think.
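For the curious, here’s roughly what temperature does to the model’s choice of next word, in a small numpy sketch with made-up scores.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities; temperature sets how daring the pick is."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5, 0.1]  # made-up scores for four candidate next words

print(softmax_with_temperature(logits, 0.2))  # low temperature: almost always the top word
print(softmax_with_temperature(logits, 1.5))  # high temperature: flatter, more surprising picks
```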
Loss function
For models, it’s a calculated error and the gradient that flows back from it.
For us, it’s regret. Embarrassment. The wince of feedback.
Pain and fear are our backpropagation signals.
Where do we excel?
But for all the parallels, there are crucial ways our brains still outclass even the largest models.
LLMs don’t want anything. They don’t have drive, curiosity, fear, embarrassment, or delight. They don’t learn unless someone forces them to. They don’t decide to improve. We do.
We seek out the loop. We care when we’re wrong. We revise because we want to get better, not because we’re re-trained on a new batch.
We remember emotionally. The sting of failure. The warmth of praise. The embarrassment of a bad take in public. That visceral encoding is something no model has.
We can self-direct. A model doesn’t wake up one day and say, “I think I need to get better at analogies.” But we do. We read something brilliant and feel inspired. We listen to a master and feel the gap. That’s not loss minimization. That’s ambition.
We generalize across domains in weird, leaky, beautiful ways. A lesson in Carnatic violin may improve our writing cadence. A novel may shape how we manage teams. We mix metaphors, break schemas, leap categories. LLMs struggle with that. They interpolate. We cross-pollinate.
We also choose our training data. We can decide what to consume, who to listen to, what to believe. We can uninstall toxic sources. Curate higher quality inputs. Reinforce the patterns we want to keep.
And unlike static models, we have agency over our fine-tuning. We can say: I don’t want to respond that way anymore. I don’t want to be that version of myself. And we can go train a better one.
A model may freeze its weights. But we don’t have to.
We’re wetware — always learning, always plastic, always in the loop.