Emi Linds

Human-Centered Creative Technologist exploring Growth, Identity, and Intelligent Innovation
Illustration of a wizard composed of mathematical equations standing atop a gradient descent curve labeled J(w, b), with glowing typography reading “Math in a Wizard’s Robe.”

AI Isn’t Magic. It’s Math in a Wizard’s Robe.

This piece is part of The Human Margin, a series of letters and reflections on the humanity in work, growth, and meaning.

A field note on what AI actually is, why it keeps getting smarter, and how systems learn – even when we don’t see it happening.

Originally shared on LinkedIn as part of The Human Margin.

If you’ve asked Siri a question, asked ChatGPT to summarize a PDF, let Google autocomplete your thoughts, or watched Netflix serve up yet another eerily on-brand recommendation, you’re not interacting with magic.

You’re interfacing with math, inference, and scale.

AI has entered the room – not as myth, but as infrastructure.

What AI Actually Is

Artificial Intelligence (AI) is not intelligence in the human sense. It’s a catch-all for systems that mimic aspects of human cognition: pattern recognition, language interpretation, decision-making.

These systems don’t “understand.”
They optimize.

At its core, AI is:

  • Math – models and probability distributions
  • Data – the training corpus (your digital exhaust)
  • Speed – the compute that turns inference into a real-time reflex

Think of AI as a digital assistant with statistical recall at scale. It doesn’t synthesize learning. It calculates. Its power comes from scale, not sentience.

Yes, it’s math.
But it’s math in a wizard’s robe – coded in abstraction, cloaked in UX, embedded into the seams of our systems.

It feels smart because it is efficient: converging on the next best action, not through understanding, but through probability.
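At its simplest, “converging on the next best action through probability” looks like this toy sketch (my own invented example, not from any real model): count what tends to follow what, then pick the most likely continuation.

```python
from collections import Counter, defaultdict

# Invented mini-corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word -- no understanding, just counts."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

That is probability doing the work: not comprehension, just frequency at scale.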


Why AI Keeps Getting Better

The upgrade engine is Machine Learning (ML)—a branch of AI that improves itself by watching outcomes instead of following hard-coded rules.

Three key methods:

  • Supervised Learning – give it labeled examples: “this is spam, this isn’t.”
  • Unsupervised Learning – let it find its own patterns. No labels, just vibes.
  • Reinforcement Learning – reward trial-and-error. Think self-driving cars or robotics.
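Supervised learning is the easiest of the three to see in code. Here’s a toy sketch (invented messages and labels): the model is never told “block the word prize” – it infers which words signal spam purely from labeled examples.

```python
from collections import Counter

# Made-up labeled training data: "this is spam, this isn't."
labeled = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in labeled:
    word_counts[label].update(text.split())

def classify(text):
    """Label new text by which class its words were seen with more often."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize money"))  # spam
print(classify("noon meeting"))      # ham
```

Real spam filters are far more sophisticated, but the principle is identical: labeled outcomes in, decision boundary out.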

But the real engine behind all of it?

STILL MATH.

There, I said it.

Gradient Descent.

Most modern models – GPTs, vision transformers, diffusion systems – learn via Stochastic Gradient Descent (SGD).

Even the most advanced neural networks still learn the old-fashioned way:
by calculating gradients, nudging weights, and minimizing loss, one batch at a time.

Basically: beneath the sci-fi interface is just a system doing calculus.

Instead of digesting the entire dataset at once (which is computationally expensive), SGD lets models learn incrementally, adjusting parameters batch-by-batch to converge toward optimal performance.
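Here is that loop stripped to its bones – a minimal pure-Python sketch (with made-up data) of SGD fitting a line y = w·x + b, batch by batch:

```python
import random

random.seed(0)
# Invented, noiseless training data: the true answer is w=2, b=1.
data = [(i / 100, 2.0 * (i / 100) + 1.0) for i in range(100)]

w, b = 0.0, 0.0
lr = 0.1  # learning rate: how big each downhill step is

for epoch in range(200):
    random.shuffle(data)
    for i in range(0, len(data), 10):  # mini-batches of 10 points
        batch = data[i:i + 10]
        # Gradient of mean squared error over just this batch.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
        grad_b = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
        w -= lr * grad_w  # nudge the weights downhill
        b -= lr * grad_b

print(round(w, 2), round(b, 2))  # lands very close to w=2, b=1
```

Calculate gradients, nudge weights, minimize loss, one batch at a time – exactly the loop described above, just with twenty lines instead of a trillion parameters.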

So What About Deep Learning?

Deep Learning is just Machine Learning at scale, stacked in layers.

At the center of it is the neural network, a flexible mathematical system that gets better at a task the more examples it sees – and the more layers it has to interpret them.

We can go deeper later. For now, know this: neural networks are the foundation of the AI systems shaping our world.

A visual: the blindfolded hiker

Training a deep neural network is like navigating a downward mountain slope while blindfolded.

You can’t see anything. You don’t know where the valley is. You only feel the slope right beneath your feet – the local feedback.

This is Stochastic Gradient Descent (SGD).

Instead of analyzing the entire mountain (the full dataset) at once, you grab a small sample of terrain around you: a mini-batch of data.

That mini-batch gives you just enough feedback to estimate the slope (the gradient) and take a step in what you hope is the right direction.

You do this over and over:

  • Feel the local slope
  • Take a step (update weights)
  • Sample a new patch of terrain (new mini-batch)
  • Repeat
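The hiker’s loop maps onto code almost line for line. A toy sketch (one parameter, made-up terrain numbers) where the valley floor sits at the mean of the data:

```python
import random

random.seed(1)
terrain = [2.8, 3.1, 2.9, 3.2, 3.0, 2.7, 3.3, 3.0]  # the full dataset
w, lr = 0.0, 0.2  # start far from the valley; lr is the step size

for step in range(200):
    batch = random.sample(terrain, 2)                     # sample a patch of terrain
    slope = sum(2 * (w - t) for t in batch) / len(batch)  # feel the local slope
    w -= lr * slope                                       # take a step downhill
    # ...and repeat

print(round(w, 1))  # settles near 3.0, the valley floor (the data's mean)
```

Each pass never sees the whole mountain – two points at a time is enough feedback to keep stepping downward.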

Sometimes the steps are smooth. Sometimes they wobble. If your learning rate isn’t too aggressive and your direction isn’t too noisy, you gradually descend toward a minimum in the loss landscape.

SGD doesn’t need a perfect view. It just needs enough feedback to keep moving downward.

The model doesn’t memorize the mountain. It learns how to descend – blindfolded, guided only by small samples and smart steps.

So what now?

AI isn’t “coming.” It’s already architecting decisions – about what gets flagged, prioritized, approved, or ignored.

The real question isn’t whether you’ll use it.

It’s whether you’ll understand what it’s optimizing for – and whether that optimization aligns with what matters to you, your work, or your mission.

You don’t need to code the algorithms (you can, if you’re curious like me).
But you may need to ask better questions of them.

Because the future isn’t a binary between carbon and code; it’s a collaboration.

And in that collaboration, humanity isn’t obsolete – far from it.
It’s the operating system.

From someone who’s done the math – and still trusts you above all.
