Emi Linds

Human-Centered Creative Technologist exploring Growth, Identity, and Intelligent Innovation
[Cover image: an illustrated car driving along a glowing forest road at night, guided by a winding yellow path overlaid with a multi-objective loss formula. The text reads: “How Machines Know They Are Off Course, and Why It Matters.”]

How machines know they’re off course, and why it matters

This piece is part of The Human Margin, a series of letters and reflections on the humanity in work, growth, and meaning.


What’s your AI optimizing for, and how does it know it’s off course? A guide for strategic leaders building systems that learn.

Estimated reading time: 5 minutes

Originally shared on LinkedIn as part of The Human Margin for AI – Read the conversation here

If you’re committing resources to AI, internally or through vendors, you have to trust how the system learns to improve over time.

A lot of that learning depends on a key question:

“How does the system know it’s off course?”

The answer lies in a mathematical mechanism called the loss function. If it’s misaligned, meaning the system is being optimized for the wrong things, the AI will take its mistakes and amplify them at scale.

This field note clarifies:

  • what a loss function is,
  • why defining what “off course” means in your system matters, and
  • the questions you should ask.

What is a loss function?

A loss function tells the AI: “You missed the target, here’s how badly.”

It measures the margin (“loss”) between the system’s prediction and the actual result, and feeds that information back to guide improvement.

The loss function is the math the model uses to correct itself. It’s what your system thinks you care about.

Important note: This measurement isn’t objective “truth”; it’s only as good as the data and priorities used to define it.
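To make that concrete, here is a minimal sketch of one common loss function, mean squared error. The predictions and actuals are made-up numbers, not from any real system.

```python
# Mean squared error (MSE): one common loss function.
# It scores how far predictions landed from what actually happened.

def mse_loss(predictions, actuals):
    """Average squared gap between prediction and reality."""
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

# Say a model forecast demand for three days, and reality disagreed:
predicted = [100.0, 150.0, 90.0]
actual = [110.0, 140.0, 95.0]

loss = mse_loss(predicted, actual)
print(loss)  # 75.0 -- "you missed the target, and here's how badly"
```

That single number is the feedback signal everything else learns from.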

A visual: driving with GPS

Think of AI like a GPS-assisted car.

  • Data is the fuel – no data, no movement.
  • The model is the engine – turning data into decisions.
  • The loss function is your GPS – constantly telling you how far off you are from your goal.
  • The optimization algorithm is the driver – adjusting direction based on that signal.

If the GPS is outdated or misaligned, the car still drives; it just drives efficiently in the wrong direction.

Performance problems often aren’t about a broken engine; they’re about the wrong destination.
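The whole analogy can be sketched as a tiny training loop. Everything here, the data, the single-knob model, and the learning rate, is an illustrative assumption, not a real pipeline.

```python
# Fuel (data), engine (model), GPS (loss), driver (optimizer) in one loop.
# Toy setup: learn the weight w so that y ≈ w * x; the true relationship is y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # the fuel
w = 0.0                                      # the engine's single knob

for _ in range(200):
    # GPS: the gradient of mean squared error says how far off course w is
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Driver: steer w against that signal (learning rate 0.05)
    w -= 0.05 * grad

print(round(w, 3))  # 2.0 -- the loop has steered w to the true relationship
```

Swap in a misaligned loss and the same loop would steer, just as efficiently, toward the wrong destination.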

Loss ≠ truth

Correction only works if the reference point is valid.

If your training data is biased or outdated, or if your objective is poorly defined, your AI will still “learn”… it will just learn the wrong thing very well.

And many systems optimize during training using one loss function, but are judged in production on different business metrics. When those are out of sync, we’re driving blind.
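A toy illustration of that mismatch, with made-up fraud numbers: one model looks better on raw accuracy (a typical training proxy), while the other is far better on a cost-weighted business metric where missed fraud hurts most.

```python
# Labels: 1 = fraud, 0 = legitimate. Two models' predictions on ten transactions.
actual  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
model_a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # never flags anything
model_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # catches both frauds, three false alarms

def accuracy(pred, y):
    return sum(p == t for p, t in zip(pred, y)) / len(y)

def business_cost(pred, y, fn_cost=100, fp_cost=5):
    # A missed fraud (false negative) costs far more than a false alarm.
    return sum(fn_cost if (t == 1 and p == 0) else fp_cost if (t == 0 and p == 1) else 0
               for p, t in zip(pred, y))

print(accuracy(model_a, actual), business_cost(model_a, actual))  # 0.8, cost 200
print(accuracy(model_b, actual), business_cost(model_b, actual))  # 0.7, cost 15
```

If training only chases accuracy, model A “wins” while quietly losing the business.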

Important note: You’re not just responsible for the engine; you own the destination and the definition of success.

Not all loss functions are equal

Different objectives require different forms of feedback, for example:

  • In recommendation systems, do you want to maximize engagement, or prioritize diverse and trustworthy content? (Trade-off: click-through vs content quality)
  • In autonomous driving, should the vehicle prioritize passenger comfort or strictly minimize route time? (Trade-off: smoothness vs efficiency)
  • In a virtual fitness coach, should the AI push you to meet your weekly goal, or back off if you’ve had a stressful day? (Trade-off: performance motivation vs user well-being)

These trade-offs aren’t “technical”; they’re strategic, and they’re often baked into the loss function.

Also note: Many real-world systems balance multiple objectives (e.g. accuracy + fairness + latency) by combining multiple loss terms. This is called a multi-objective loss function (as featured in the cover image), and it’s how complex goals are encoded. We can go deeper on this later.
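One hedged way to picture a multi-objective loss: a weighted sum of separate loss terms, where the weights are exactly where strategy enters. The weights and scores below are illustrative, not recommendations.

```python
# A multi-objective loss as a weighted sum of separate loss terms.
# The weights encode priorities; these values are assumptions for illustration.

def combined_loss(accuracy_loss, fairness_loss, latency_loss,
                  w_acc=1.0, w_fair=0.5, w_lat=0.1):
    return w_acc * accuracy_loss + w_fair * fairness_loss + w_lat * latency_loss

# Two candidate models with different strengths:
model_x = combined_loss(accuracy_loss=0.10, fairness_loss=0.40, latency_loss=0.20)
model_y = combined_loss(accuracy_loss=0.15, fairness_loss=0.10, latency_loss=0.20)

# Under these weights, the slightly less accurate but fairer model "wins" (lower loss):
print(round(model_x, 2), round(model_y, 2))  # 0.32 0.22
```

Change the weights and a different model wins; that is a strategic decision wearing a mathematical costume.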

Questions to ask the technical team

1. Ask about the GPS, not just the engine

It’s often more exciting to talk about the model and the data, and less so about the feedback loops and learning processes.

🔑 Ask:

“What kind of loss function are we using, and why was it chosen?”

Some teams use standard losses (like cross-entropy or mean squared error). Others may create custom or hybrid losses that address more domain-specific priorities (e.g., clinical safety, ranking quality, etc.).
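For intuition, here are those two standard losses computed on the same tiny made-up case; note how cross-entropy punishes a confident wrong prediction much harder than squared error does.

```python
import math

# Two standard losses, scored on the same made-up case:
# the model says 0.95 ("almost certainly positive") but the true label is 0.

def mean_squared_error(preds, actuals):
    return sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(actuals)

def binary_cross_entropy(probs, labels):
    # Confidently wrong predictions are punished very hard.
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(labels)

print(round(binary_cross_entropy([0.95], [0]), 2))  # 3.0 -- heavy penalty
print(round(mean_squared_error([0.95], [0]), 2))    # 0.9 -- much milder
```

Which penalty profile is “right” depends on what a confident mistake costs you, which is a domain question, not a math one.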

2. Define what “good enough” looks like, or the AI will

Loss functions always encode trade-offs. Even defaults make choices:

  • Speed vs. accuracy
  • False positives vs. false negatives
  • Simplicity vs. nuance
  • Sensitivity vs. stability

If you don’t set those priorities clearly, the system will chase the easiest metric, which might hurt your customers, operations, or brand.
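One way such a trade-off gets baked into the loss itself is by weighting one kind of mistake more heavily than another. A sketch, assuming purely for illustration that a missed positive costs five times a false alarm:

```python
import math

# Baking a trade-off into the loss: a missed positive (false negative)
# is weighted five times heavier than a false alarm. Weights are assumptions.

def asymmetric_loss(probs, labels, fn_weight=5.0, fp_weight=1.0):
    total = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            total += fn_weight * -math.log(p)      # penalty for missing a positive
        else:
            total += fp_weight * -math.log(1 - p)  # penalty for a false alarm
    return total / len(labels)

# An equally confident miss now hurts five times as much as a false alarm:
miss = asymmetric_loss([0.1], [1])   # said 10% positive; it was positive
alarm = asymmetric_loss([0.9], [0])  # said 90% positive; it was negative
print(round(miss / alarm, 1))  # 5.0
```

The 5:1 ratio is the strategic choice; the code just enforces it on every training example.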

🔑 Ask:

“Have we defined acceptable trade-offs, and do they align with our strategy, values, and risk appetite?”

3. Adapt to the changing world

Consumer behaviour and the environment change constantly. What was “right” six months ago may now be off course, or even actively harmful.

Even the best models degrade over time, both in accuracy and in alignment with your goals.

In practice, this means pairing your loss function with model monitoring and alerting systems that detect drift, unexpected outputs, or KPI mismatches.
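In its simplest form, that monitoring can be a rule comparing recent error to a deployment-time baseline. The tolerance and numbers below are illustrative assumptions:

```python
# A minimal drift alert: compare recent average error against a baseline
# measured at deployment. The tolerance and numbers are illustrative.

def drift_alert(recent_errors, baseline_error, tolerance=0.5):
    """True when recent average error exceeds the baseline by more than 50%."""
    recent_avg = sum(recent_errors) / len(recent_errors)
    return recent_avg > baseline_error * (1 + tolerance)

baseline = 0.10  # average loss observed when the model shipped

print(drift_alert([0.11, 0.09, 0.12], baseline))  # False -- still on course
print(drift_alert([0.18, 0.21, 0.19], baseline))  # True -- time to recalibrate
```

Real monitoring stacks are richer than this, but the governance question is the same: who picked the baseline and the tolerance, and when were they last reviewed?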

🔑 Ask:

“Who owns loss function review and refresh cycles?”

“What alerts us when it’s time to recalibrate?”

Bonus insight: adaptive loss functions

Some cutting-edge systems can adapt how they learn. These systems use adaptive loss functions to shift priorities based on real-time feedback, changing conditions, or evolving ethical expectations.

Examples include:

  • Dynamic reweighting of training examples
  • Curriculum learning: teaching simpler concepts first
  • Reward shaping: guiding agents in complex environments

However, these aren’t “set-and-forget” solutions; they require clear business signals and strong governance to avoid unintended consequences.
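To make the first pattern, dynamic reweighting, concrete: a minimal sketch where examples the model currently gets most wrong receive proportionally more weight on the next pass. The loss values are invented for illustration.

```python
# Dynamic reweighting: examples the model currently gets most wrong
# receive proportionally more weight on the next training pass.

def reweight(example_losses):
    """Weight each training example in proportion to its current loss."""
    total = sum(example_losses)
    return [loss / total for loss in example_losses]

losses = [0.1, 0.1, 0.8]  # the third example is much harder than the others
weights = reweight(losses)
print([round(w, 2) for w in weights])  # [0.1, 0.1, 0.8] -- the hard example dominates
```

This is also where governance matters: if the “hard” examples are hard because the data is noisy or biased, reweighting amplifies exactly the wrong signal.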

🔑 Ask:

“Do our feedback mechanisms, including the loss function, still reflect current priorities, or last quarter’s assumptions?”

A parting note

In every AI system, the loss function defines:

  • What “off course” means
  • How far off the system is
  • What types of corrections are prioritized

Whether in machines or in high-level decision-making, how you define “off course” determines what gets fixed and what gets reinforced.

AI doesn’t replace human decision-making, it can’t. It magnifies it. It is a collaboration.

Your clarity defines its course, and you get to decide where it takes us next.

So… where to?

From someone who believes in your clarity


For additional reading

Akhtar, T., Rahman, A., & Ghosh, S. (2025). CALF: A conditionally adaptive loss function to mitigate class-imbalanced segmentation. ResearchGate. https://www.researchgate.net/publication/390570244_CALF_A_Conditionally_Adaptive_Loss_Function_to_Mitigate_Class-Imbalanced_Segmentation

Liu, Y., Zhang, Y., Wang, X., Liu, Z., Pan, S., & Tang, J. (2025). Gradient-based multi-objective deep learning: Algorithms, theories, applications, and beyond. arXiv. https://arxiv.org/abs/2501.10945


💬 Have a correction, insight, or challenge? I’d love to hear it – conversation is always welcome.

I’m Emi Linds, a Canadian human-centered AI strategist and creator of The Human Margin – a clarity system for responsible AI, trust architecture, and narrative-driven leadership. This series is part of my public thinking practice and personal growth and learning initiative on responsible AI design that holds at scale.



For a full list of my personal articles, you can follow my Medium page.

#AILeadership #StrategicThinking #LossFunctionsExplained #TrustAtScale
