INTRODUCTION - Why MLOps Interviews Are Harder Than ML Interviews (and Why Most Candidates Fail Them)

Machine learning interviews used to be about models.
MLOps interviews are about trust.

That single shift explains why many strong ML engineers struggle when interviews move into CI/CD pipelines, monitoring, automation, and reliability discussions. They are technically capable, but they have never been evaluated on operational ownership.

In 2026, companies no longer hire ML engineers who can “just build models.” They hire engineers who can ship, operate, debug, and evolve ML systems in production. Every outage, silent model degradation, data pipeline break, or failed deployment has taught organizations the same lesson:

The model is rarely the problem. The system around it is.

This is why MLOps interviews feel different.

They are less forgiving.
They are less theoretical.
They are more scenario-driven.
They probe failures instead of successes.

Interviewers are asking a very specific question:

“If this person owned our ML platform, would we sleep at night?”

This guide is designed around that reality.

Instead of asking you to list tools or recite CI/CD definitions, MLOps interviews test whether you understand:

  • how ML breaks differently from software
  • why CI/CD for ML is fundamentally harder
  • how monitoring must go beyond accuracy
  • how automation can introduce as much risk as it removes
  • how to design systems that degrade safely instead of failing catastrophically

Most candidates fail because they answer MLOps questions like software engineers with ML knowledge, not like ML engineers with production accountability.

This blog teaches you how to answer MLOps interview questions the way hiring teams expect in 2026: with discipline, restraint, and systems thinking.

 

SECTION 1 - CI/CD for Machine Learning: What Interviewers Are Really Testing

CI/CD is where MLOps interviews usually begin, and where weak candidates are exposed almost immediately.

Interviewers are not testing whether you know what CI/CD stands for. They are testing whether you understand why CI/CD breaks down in ML systems, and how disciplined engineers adapt traditional DevOps practices to probabilistic, data-driven workflows.

1. Why CI/CD for ML Is Not Just “DevOps + Models”

A classic opening question:

“How would you implement CI/CD for an ML system?”

Junior candidates answer by describing:

  • GitHub Actions
  • Jenkins pipelines
  • automated tests
  • Docker builds
  • deployment steps

All technically correct.
All incomplete.

Senior MLOps candidates immediately reframe:

“CI/CD for ML is harder because the system depends on data, models, and code, all of which change independently.”

This single sentence signals maturity.

ML CI/CD must account for:

  • data versioning
  • feature drift
  • model retraining triggers
  • evaluation gates
  • non-deterministic behavior
  • backward compatibility of predictions

Interviewers are listening for that awareness.

 

2. CI in ML Interviews: What “Testing” Really Means

Interviewers often ask:

“What tests would you include in an ML CI pipeline?”

Weak answers focus only on unit tests.

Strong answers expand the testing surface:

“I’d separate code tests, data tests, and model tests. Code tests validate logic. Data tests validate schema and distributions. Model tests validate performance thresholds and stability relative to previous versions.”

This demonstrates understanding that ML failures often originate upstream of code.

Senior candidates often mention:

  • schema validation
  • null and range checks
  • distribution shift detection
  • training reproducibility checks
  • performance regression tests

Not as a checklist, but as safeguards against silent failure.
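
To make that concrete, here is a minimal sketch of what data tests might look like in CI. The DataFrame, column names, and ranges are hypothetical stand-ins for whatever your feature contract actually defines.

# Minimal sketch of CI data tests, assuming a pandas DataFrame and
# hypothetical column names ("age", "amount") from your feature contract.
import pandas as pd

EXPECTED_SCHEMA = {"user_id": "int64", "age": "int64", "amount": "float64"}

def test_schema(df: pd.DataFrame) -> None:
    # Schema check: every expected column exists with the expected dtype.
    for col, dtype in EXPECTED_SCHEMA.items():
        assert col in df.columns, f"missing column: {col}"
        assert str(df[col].dtype) == dtype, f"{col} is {df[col].dtype}, expected {dtype}"

def test_nulls_and_ranges(df: pd.DataFrame) -> None:
    # Null and range checks: catch silent upstream breaks before training.
    assert df["age"].notna().all(), "age contains nulls"
    assert df["age"].between(0, 120).all(), "age outside plausible range"
    assert (df["amount"] >= 0).all(), "negative transaction amounts"

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"user_id": [1, 2], "age": [34, 57], "amount": [12.5, 80.0]}
    ).astype(EXPECTED_SCHEMA)
    test_schema(sample)
    test_nulls_and_ranges(sample)
    print("data tests passed")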

 

3. Model Promotion Is a Decision, Not a Build Step

One of the most revealing MLOps interview questions is:

“When does a model move from staging to production?”

Junior candidates answer:

“When it passes evaluation metrics.”

Senior candidates answer:

“When it passes metrics and aligns with operational constraints and risk tolerance.”

They talk about:

  • comparing against a baseline
  • statistical significance
  • cost of false positives
  • rollout strategy
  • rollback readiness

This distinction matters because deploying a model is a business decision, not a technical one.

This mindset aligns closely with the production expectations discussed in
MLOps vs. ML Engineering: What Interviewers Expect You to Know in 2025,
where CI/CD maturity is treated as a trust signal rather than a tooling skill.
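
One way to make this explicit in an interview is to describe the promotion gate as a decision function rather than a pipeline step. The sketch below is illustrative only; the metric names, thresholds, and rollback flag are hypothetical, and a real gate would also encode statistical significance and rollout policy.

# Sketch of a promotion gate: hypothetical thresholds and metric names,
# intended to show that promotion is a decision with explicit conditions.
from dataclasses import dataclass

@dataclass
class Candidate:
    auc: float
    latency_p95_ms: float
    rollback_plan_ready: bool

def should_promote(candidate: Candidate, baseline_auc: float,
                   min_uplift: float = 0.005, max_latency_ms: float = 200.0) -> bool:
    # 1. Must beat the current production baseline by a meaningful margin.
    beats_baseline = candidate.auc - baseline_auc >= min_uplift
    # 2. Must satisfy operational constraints (here: a serving latency budget).
    within_budget = candidate.latency_p95_ms <= max_latency_ms
    # 3. Must be operationally safe to ship: a rollback path exists.
    return beats_baseline and within_budget and candidate.rollback_plan_ready

if __name__ == "__main__":
    cand = Candidate(auc=0.913, latency_p95_ms=142.0, rollback_plan_ready=True)
    print(should_promote(cand, baseline_auc=0.905))  # True under these thresholds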

 

4. Continuous Delivery in ML: Why Automation Needs Guardrails

Interviewers may ask:

“Should ML models be automatically deployed once trained?”

There is no universally correct answer, and interviewers know that.

Weak candidates answer definitively:

“Yes, automation reduces errors.”

Senior candidates qualify:

“Automation is useful, but I’d gate deployment based on risk. Low-impact models may auto-deploy. High-risk models should require human approval.”

This demonstrates:

  • awareness of blast radius
  • appreciation of business risk
  • avoidance of dogma

Automation without judgment is dangerous in ML systems.
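
If asked to make the gating concrete, a simple risk-tier policy is enough to show the idea. The tiers and policies below are hypothetical examples, not a prescription.

# Sketch: gate automated deployment on a risk tier. Tiers and policies are
# hypothetical; high-risk models fall back to human approval.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g. ranking tweaks, easily reversible
    MEDIUM = "medium"  # e.g. recommendations with revenue impact
    HIGH = "high"      # e.g. credit, fraud, safety-critical decisions

def deployment_policy(tier: RiskTier) -> str:
    if tier is RiskTier.LOW:
        return "auto-deploy with automated rollback on regression"
    if tier is RiskTier.MEDIUM:
        return "staged rollout; auto-promote only if guardrail metrics hold"
    return "require human approval before any traffic is served"

if __name__ == "__main__":
    for tier in RiskTier:
        print(tier.value, "->", deployment_policy(tier))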

 

5. Versioning: Code Is the Easy Part

A common follow-up:

“What needs to be versioned in ML CI/CD?”

Junior candidates say:

  • code
  • models

Senior candidates add:

  • training data
  • feature definitions
  • labels
  • evaluation datasets
  • configuration
  • environment

And then they explain why:

“Without versioning data and features, you can’t reproduce results or debug regressions.”

This shows real production experience.
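
A lightweight way to express this is a run manifest that pins every input to a training run. The fields below are hypothetical; tools such as DVC or MLflow formalize the same idea, but the point is what gets captured, not the tool.

# Sketch of a training-run manifest: hypothetical fields illustrating that
# code alone is not enough to reproduce or debug a model.
import hashlib
import json
from dataclasses import dataclass, asdict

def file_hash(path: str) -> str:
    # Content hash so "the same file name" cannot hide a changed dataset.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

@dataclass
class RunManifest:
    code_commit: str          # git SHA of the training code
    training_data_hash: str   # content hash of the training snapshot
    eval_data_hash: str       # frozen evaluation set used for gating
    feature_spec_version: str # version of feature definitions / transformations
    label_version: str        # labeling rules change too
    config: dict              # hyperparameters and training configuration
    environment: str          # e.g. container image digest or lock-file hash

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)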

 

6. CI/CD Failures Interviewers Expect You to Anticipate

Strong MLOps candidates proactively mention failure modes:

  • a new model passes offline metrics but fails online
  • training data distribution changes silently
  • features differ between training and serving
  • retraining pipelines drift over time
  • rollbacks are impossible because dependencies changed

Interviewers are far more impressed by candidates who anticipate failure than those who describe ideal pipelines.

 

7. The Signal Interviewers Are Extracting in CI/CD Questions

By the end of CI/CD questioning, interviewers are asking themselves:

  • Does this candidate understand ML as a living system?
  • Do they treat deployment as a risk-managed decision?
  • Can they design pipelines that prevent silent failures?
  • Do they value reproducibility and traceability?

Candidates who answer CI/CD questions purely in tooling terms rarely pass.

Candidates who answer them in decision, risk, and system terms almost always advance.

 

Why Section 1 Matters

CI/CD questions are not about pipelines.
They are about discipline.

They reveal whether you:

  • think beyond notebooks
  • respect production complexity
  • understand that ML reliability is fragile
  • can be trusted with automation

This is why CI/CD is often the first filter in MLOps interviews.

 

SECTION 2 - Monitoring & Observability: How Interviewers Test Whether You Can Detect Failure Before Users Do

If CI/CD questions test whether you can ship ML systems safely, monitoring questions test whether you can keep them alive.

This is where MLOps interviews become unforgiving.

Interviewers have seen the same story repeat itself across companies and teams: a model launches successfully, metrics look good initially, and then, quietly, performance degrades. No alerts fire. No dashboards scream. The business notices only when users complain, revenue drops, or regulators ask questions.

Monitoring questions exist because of this pain.

When interviewers ask about monitoring, they are not asking for tools. They are asking whether you understand that ML systems fail silently, and whether you can design observability that surfaces problems before they become incidents.

 

1. Why Monitoring ML Is Fundamentally Different from Monitoring Software

A classic opening question:

“How would you monitor an ML model in production?”

Weak candidates answer like software engineers:

  • uptime
  • latency
  • error rates

All necessary. None sufficient.

Senior MLOps candidates immediately reframe:

“ML monitoring has to cover data, predictions, and outcomes, not just system health.”

This distinction is critical.

Unlike traditional services, ML systems can:

  • return valid responses that are wrong
  • degrade gradually instead of failing abruptly
  • break due to upstream data changes
  • become biased without throwing errors

Interviewers want to know whether you understand that accuracy decay is not a crash; it’s a silent failure mode.

 

2. The Three Monitoring Layers Interviewers Expect You to Cover

Strong candidates consistently organize monitoring into three layers, even if they don’t label them explicitly.

System-level monitoring ensures the service is alive:

  • latency
  • throughput
  • error rates
  • resource usage

Data-level monitoring ensures inputs still make sense:

  • schema changes
  • missing values
  • distribution shifts
  • feature ranges

Model-level monitoring ensures predictions remain meaningful:

  • prediction distribution
  • confidence drift
  • performance proxies
  • outcome alignment (when labels arrive)

When candidates mention only one layer, interviewers assume inexperience.

 

3. Data Drift vs Concept Drift: A Question That Filters Experience

Interviewers often ask:

“How do you detect drift?”

Junior candidates jump to statistical tests.

Senior candidates pause and clarify:

“Are we talking about input drift or concept drift? They require different signals.”

Then they explain:

  • input drift → feature distributions change
  • concept drift → the relationship between features and labels changes

This matters because you can’t monitor what you don’t define.

Senior candidates also mention the uncomfortable truth:

“Drift detection doesn’t tell you what to do, it tells you when to investigate.”

This shows realism.
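
For input drift on a single numeric feature, one common lightweight signal is the Population Stability Index between a training reference window and recent serving data. The sketch below assumes numpy and an illustrative alert threshold; as noted above, a high score is a prompt to investigate, not an automatic retraining trigger.

# Sketch: Population Stability Index (PSI) for one numeric feature.
# The threshold is a rule of thumb, not a universal constant.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    # Bin cut points come from the reference (training-time) distribution.
    cuts = np.quantile(reference, np.linspace(0, 1, bins + 1))[1:-1]
    ref_frac = np.bincount(np.digitize(reference, cuts), minlength=bins) / len(reference)
    cur_frac = np.bincount(np.digitize(current, cuts), minlength=bins) / len(current)
    eps = 1e-6  # avoid log(0) in empty bins
    ref_frac, cur_frac = ref_frac + eps, cur_frac + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0, 1, 50_000)
    live = rng.normal(0.3, 1, 50_000)   # shifted mean simulates input drift
    print(f"PSI = {psi(train, live):.3f}")  # > 0.2 usually warrants investigation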

 

4. Why Accuracy Is a Terrible First-Line Monitor

Another common question:

“Do you monitor accuracy in production?”

Weak answer:

“Yes.”

Senior answer:

“Eventually, but accuracy often arrives too late.”

Senior engineers explain that:

  • labels may be delayed
  • labels may be noisy
  • labels may be unavailable

So they monitor proxies first:

  • prediction confidence
  • distribution stability
  • feature correlations
  • business KPIs

Accuracy is a lagging indicator.
Good monitoring relies on leading indicators.

This distinction is subtle, and highly valued.

 

5. Alerting Philosophy: Why Too Many Alerts Mean No Monitoring at All

Interviewers frequently probe alerting:

“What alerts would you set up?”

Junior candidates list dozens.

Senior candidates talk about restraint:

“Alerts should fire only when action is required. Otherwise, teams learn to ignore them.”

They discuss:

  • thresholds tied to business impact
  • anomaly detection with context
  • alert fatigue
  • on-call sustainability

This signals operational maturity.

Monitoring that no one responds to is worse than no monitoring at all.

 

6. The Hardest Question: “How Do You Know the Model Is Still Adding Value?”

This question separates true MLOps engineers from everyone else.

Senior candidates answer by connecting ML signals to business outcomes:

“Ultimately, the model is valuable if downstream metrics improve: reduced fraud loss, better engagement, faster decisions. If those stall or reverse, the model may no longer be helping, even if technical metrics look fine.”

This framing aligns with the system-level thinking described in
Beyond the Model: How to Talk About Business Impact in ML Interviews,
where monitoring is tied to outcomes, not just statistics.

Interviewers listen closely here because this answer shows ownership beyond the ML team.

 

7. Debugging Through Observability, Not Guesswork

A common follow-up:

“What do you do when monitoring shows degradation?”

Weak answers jump to retraining.

Senior answers slow down:

“I’d first determine whether the issue is data, model behavior, or downstream interpretation. Retraining without diagnosis can make things worse.”

They describe:

  • slicing metrics by segment
  • comparing against baselines
  • checking recent pipeline changes
  • validating assumptions

This shows discipline under pressure.
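
Here is a small sketch of what slicing metrics by segment can look like, assuming a log of predictions in a pandas DataFrame with hypothetical column names. A flat global metric can hide one segment that is quietly degrading.

# Sketch: slice an error metric by segment to localize degradation,
# assuming logged predictions with hypothetical columns.
import pandas as pd

def error_by_segment(log: pd.DataFrame, segment_col: str) -> pd.DataFrame:
    # Mean absolute error and volume per segment.
    log = log.assign(abs_error=(log["prediction"] - log["outcome"]).abs())
    return (log.groupby(segment_col)["abs_error"]
               .agg(mae="mean", volume="count")
               .sort_values("mae", ascending=False))

if __name__ == "__main__":
    log = pd.DataFrame({
        "prediction": [0.9, 0.2, 0.8, 0.1, 0.7, 0.4],
        "outcome":    [1.0, 0.0, 0.0, 0.0, 1.0, 1.0],
        "country":    ["US", "US", "BR", "BR", "US", "BR"],
    })
    print(error_by_segment(log, "country"))  # worst segments surface first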

 

8. Monitoring as a Design Constraint, Not an Afterthought

Senior candidates often mention something that juniors don’t:

“Monitoring needs to be designed alongside the model.”

They explain that:

  • features should be observable
  • predictions should be inspectable
  • decisions should be traceable

If you can’t observe it, you can’t trust it.

Interviewers recognize this immediately.

 

9. The Signal Interviewers Are Extracting in Monitoring Questions

By the end of monitoring discussions, interviewers are asking themselves:

  • Does this candidate expect models to fail?
  • Can they detect issues before users complain?
  • Do they understand drift as inevitable?
  • Can they connect ML health to business health?
  • Do they respect on-call realities?

Candidates who talk only about dashboards and tools rarely pass.

Candidates who talk about failure anticipation, signal quality, and response discipline almost always do.

 

Why Section 2 Matters

Monitoring questions are not about observability stacks.

They are about responsibility.

They reveal whether you:

  • assume success or expect failure
  • think reactively or proactively
  • treat ML as static or dynamic
  • design for accountability

In MLOps interviews, monitoring is where trust is either earned or lost.

 

SECTION 3 - Automation, Retraining, and Human-in-the-Loop: How Interviewers Test Whether You Automate Wisely or Dangerously

Automation is where MLOps interviews separate disciplined engineers from reckless ones.

Everyone agrees that automation is necessary. The disagreement, and the interview signal, lies in how much, when, and under what conditions automation should be applied in ML systems.

Interviewers ask automation and retraining questions because they have lived through the downside: pipelines that retrain blindly, models that auto-deploy regressions, feedback loops that reinforce bias, and systems that drift faster because humans were removed too early.

When interviewers probe automation, they are asking:

“Does this candidate understand that automation amplifies both correctness and mistakes?”

This section shows how senior MLOps engineers answer automation questions with restraint, safeguards, and accountability.

 

1. Automation Is a Multiplier, Not a Goal

A common interview question:

“How would you automate retraining for this model?”

Weak candidates answer by describing schedules:

  • daily retraining
  • weekly pipelines
  • cron jobs
  • triggers on new data

Senior candidates reframe immediately:

“Before automating retraining, I’d define why retraining is needed and what signal indicates degradation.”

This response shows a crucial mindset shift:

  • automation follows intent
  • automation is not inherently good
  • automation without signal is noise

Interviewers listen for this restraint.

 

2. Retraining Triggers: Why Time-Based Schedules Are Usually the Worst Default

Interviewers often ask:

“Should models retrain on a fixed schedule?”

Junior answer:

“Yes, for freshness.”

Senior answer:

“Only if time correlates with drift. Otherwise, I prefer signal-based retraining.”

Senior candidates discuss triggers such as:

  • data distribution changes
  • prediction confidence drift
  • downstream KPI degradation
  • label availability milestones

They explain that time-based retraining:

  • can waste resources
  • can introduce instability
  • can mask real issues
  • can degrade performance silently

This shows experience with real systems.
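
One way to make signal-based retraining concrete is to express the triggers as an explicit, inspectable decision. The signal names and thresholds below are hypothetical; the point is that retraining follows evidence rather than the calendar.

# Sketch: signal-based retraining trigger with hypothetical signals and
# thresholds; each decision carries a reason that can be logged and audited.
from dataclasses import dataclass

@dataclass
class RetrainSignals:
    feature_psi: float     # input drift score (e.g. max PSI across features)
    kpi_delta_pct: float   # change in the downstream business KPI
    fresh_labels: int      # newly available labeled examples

def should_retrain(s: RetrainSignals,
                   psi_threshold: float = 0.2,
                   kpi_drop_pct: float = -2.0,
                   min_labels: int = 10_000) -> tuple[bool, str]:
    if s.feature_psi > psi_threshold:
        return True, "input drift exceeded threshold"
    if s.kpi_delta_pct < kpi_drop_pct:
        return True, "downstream KPI degraded"
    if s.fresh_labels >= min_labels:
        return True, "enough new labels to justify an update"
    return False, "no retraining signal"

if __name__ == "__main__":
    decision, reason = should_retrain(RetrainSignals(0.27, -0.4, 3_200))
    print(decision, "-", reason)   # True - input drift exceeded threshold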

 

3. The Hidden Risk: Feedback Loops Created by Automation

One of the highest-signal MLOps questions is implicit:

“What can go wrong if retraining is fully automated?”

Senior candidates bring up feedback loops.

For example:

“If model predictions influence the data we collect, automated retraining can reinforce biases or collapse diversity unless we control for it.”

This answer demonstrates:

  • causal awareness
  • systems thinking
  • ethical maturity

Many candidates never mention this, and interviewers notice.

This kind of reasoning aligns closely with concerns discussed in
The New Rules of AI Hiring: How Companies Screen for Responsible ML Practices,
where automation decisions are evaluated through a risk and responsibility lens.

 

4. Human-in-the-Loop Is Not a Failure - It’s a Design Choice

Interviewers sometimes ask provocatively:

“Why not fully automate this decision?”

Weak candidates feel pressured to say yes.

Senior candidates push back thoughtfully:

“Automation reduces manual work, but some decisions benefit from human oversight, especially when errors are costly or ambiguous.”

They explain that humans add value when:

  • labels are noisy
  • edge cases matter
  • consequences are irreversible
  • ethical or legal considerations exist

This reframes human involvement as a feature, not a limitation.

 

5. Automation Boundaries: Where Seniors Draw the Line

Senior MLOps engineers implicitly define boundaries:

  • automate data ingestion, not data trust
  • automate training, not deployment approval
  • automate evaluation, not decision-making
  • automate alerts, not judgment

They explain that:

“Automation should accelerate safe decisions, not replace them.”

Interviewers find this framing reassuring.

 

6. Safe Automation Patterns Interviewers Like to Hear

Without turning into a checklist, strong candidates often describe patterns such as:

  • shadow training (train new models without deployment)
  • shadow inference (compare predictions silently)
  • staged rollouts
  • manual approval gates
  • automated rollback on regressions

Not as tools, but as risk controls.

This demonstrates that the candidate has operated systems where things went wrong.
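
As an illustration of shadow inference as a risk control, the sketch below compares production and candidate scores on the same traffic without serving the candidate. The score arrays and disagreement threshold are hypothetical.

# Sketch: shadow inference. The candidate model scores the same requests as
# production, but only the production prediction is ever served.
import numpy as np

def shadow_compare(prod_scores: np.ndarray, shadow_scores: np.ndarray,
                   disagreement_threshold: float = 0.1) -> dict:
    # Log-only comparison: quantify how often the candidate would change behavior.
    gap = np.abs(prod_scores - shadow_scores)
    return {
        "mean_gap": float(gap.mean()),
        "pct_large_disagreement": float((gap > disagreement_threshold).mean()),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prod = rng.uniform(0, 1, 10_000)
    shadow = np.clip(prod + rng.normal(0, 0.05, 10_000), 0, 1)
    print(shadow_compare(prod, shadow))  # reviewed before any traffic shifts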

 

7. When Automation Is Appropriate (And Interviewers Expect You to Say So)

Senior candidates are not anti-automation.

They explain when full automation makes sense:

  • low-risk decisions
  • reversible outcomes
  • abundant feedback
  • stable data distributions

For example:

“For low-impact personalization, automated retraining and deployment may be appropriate. For credit decisions or fraud blocking, I’d add safeguards.”

This conditional reasoning is exactly what interviewers want.

 

8. The Automation Anti-Pattern: Blind Trust in Pipelines

One of the strongest senior signals is a sentence like:

“I don’t trust automated pipelines unless I can explain their behavior.”

This conveys:

  • accountability
  • ownership
  • caution without fear

Senior engineers trust systems they can reason about, not systems that “just work.”

 

9. The Signal Interviewers Are Extracting in Automation Questions

By the end of automation discussions, interviewers are asking themselves:

  • Does this candidate understand blast radius?
  • Do they expect automation to fail sometimes?
  • Can they design guardrails?
  • Do they know when humans add value?
  • Will they resist pressure to automate prematurely?

Candidates who equate automation with maturity often fail.

Candidates who equate safe automation with maturity almost always pass.

 

Why Section 3 Matters

Automation questions are not about pipelines or schedules.

They are about judgment under amplification.

Automation magnifies decisions.
MLOps engineers are hired to ensure it magnifies the right ones.

This is why Section 3 often determines whether candidates are trusted with ownership or restricted to implementation roles.

 

SECTION 4 - End-to-End MLOps Scenarios & Failure Stories: How Interviewers Judge Ownership, Not Perfection

If CI/CD tests discipline, monitoring tests vigilance, and automation tests restraint, then end-to-end MLOps scenarios test ownership.

This is the stage of the interview where technical correctness alone stops mattering.

Interviewers now want to know:

“Has this person actually lived with an ML system when things went wrong?”

Because in real production environments, something always goes wrong.

Senior MLOps interviews increasingly revolve around scenario-based and behavioral questions that sound deceptively simple:

  • “Tell me about a production ML failure.”
  • “Describe a time a model degraded unexpectedly.”
  • “How did you handle a pipeline breaking at scale?”
  • “What would you do if a model passed offline metrics but failed online?”

These questions are not about storytelling polish. They are about decision-making under uncertainty.

 

1. Why Failure Stories Matter More Than Success Stories

Junior candidates prefer to talk about successful deployments.

Senior candidates talk about failures, calmly, analytically, and without defensiveness.

Interviewers ask failure questions because:

  • success often hides luck
  • failure exposes process
  • failure reveals judgment
  • failure shows accountability

A strong senior answer does not glorify the failure.
It dissects it.

 

2. The Structure Senior Candidates Use to Answer Failure Questions

Strong answers follow a consistent internal structure, even if not explicitly stated:

  1. Context - what system was this?
  2. Signal - how did you realize something was wrong?
  3. Impact - who or what was affected?
  4. Diagnosis - how did you isolate the cause?
  5. Decision - what did you do next?
  6. Learning - what changed afterward?

This structure signals composure and control.

Interviewers are not listening for drama.
They are listening for process.

 

3. Example: “Tell Me About a Model That Failed in Production”

A weak answer:

“The model failed because the data changed.”

A senior answer:

“We noticed prediction confidence drifting before accuracy metrics arrived. Investigation showed a change in upstream feature generation that altered distributions. We rolled back to a previous model, fixed the pipeline, and added schema validation to prevent recurrence.”

Why this works:

  • failure is detected early
  • cause is specific
  • response is measured
  • improvement is systemic

Senior candidates always close the loop.

 

4. How Seniors Talk About Mistakes Without Losing Credibility

A common fear candidates have is that admitting mistakes will hurt them.

In reality, not admitting mistakes is far worse.

Senior candidates say things like:

“In hindsight, we underestimated how fragile the data pipeline was.”
“We trusted offline metrics too much.”
“We didn’t monitor the right leading indicators initially.”

These statements signal:

  • humility
  • realism
  • growth mindset

Interviewers trust candidates who acknowledge blind spots more than those who pretend systems are perfect.

 

5. The “Offline vs Online” Trap (A Favorite Interview Scenario)

Interviewers often ask:

“What would you do if a model performs well offline but poorly online?”

Senior candidates don’t jump to retraining.

They slow down:

“I’d first check whether the evaluation data reflects production conditions. Then I’d look for training-serving skew, delayed labels, or feedback loops before touching the model.”

This answer demonstrates:

  • skepticism of metrics
  • respect for data pipelines
  • disciplined debugging

This exact pattern of reasoning is often highlighted in
End-to-End ML Project Walkthrough: A Framework for Interview Success,
where interviewers prioritize diagnosis over reaction.
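
One concrete diagnostic for training-serving skew is to recompute features offline for requests that were already served and compare them with the values the serving path actually used. The sketch below uses hypothetical column names and a hypothetical tolerance.

# Sketch: training-serving skew check. Recompute features offline for logged
# requests and compare with what the serving path used at request time.
import pandas as pd

def skew_report(served: pd.DataFrame, recomputed: pd.DataFrame,
                features: list[str], tol: float = 1e-6) -> pd.Series:
    # Fraction of rows where each feature differs beyond tolerance.
    joined = served.merge(recomputed, on="request_id", suffixes=("_serve", "_train"))
    return pd.Series({
        f: float((joined[f + "_serve"] - joined[f + "_train"]).abs().gt(tol).mean())
        for f in features
    })

if __name__ == "__main__":
    served = pd.DataFrame({"request_id": [1, 2, 3], "avg_spend_30d": [10.0, 5.0, 7.5]})
    recomputed = pd.DataFrame({"request_id": [1, 2, 3], "avg_spend_30d": [10.0, 4.0, 7.5]})
    print(skew_report(served, recomputed, ["avg_spend_30d"]))  # 1 of 3 rows mismatches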

 

6. Handling Pipeline Failures Under Pressure

Another scenario interviewers like:

“What if a retraining pipeline breaks silently?”

Junior candidates answer reactively:

“I’d fix the pipeline.”

Senior candidates answer preventatively:

“Silent failures are dangerous. I’d design the pipeline to fail loudly, add health checks, and monitor retraining freshness so issues surface quickly.”

This reframes the question from firefighting to system design maturity.
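
A minimal sketch of “failing loudly”: monitor the freshness of the latest model artifact and raise when the retraining pipeline stops producing one. The artifact path and freshness budget are hypothetical.

# Sketch: make silent pipeline failures loud by monitoring model freshness.
import os
import time

def check_model_freshness(model_path: str, max_age_hours: float = 48.0) -> None:
    # If retraining stops producing artifacts, fail loudly instead of letting
    # a stale model serve indefinitely.
    if not os.path.exists(model_path):
        raise RuntimeError(f"no model artifact found at {model_path}")
    age_hours = (time.time() - os.path.getmtime(model_path)) / 3600
    if age_hours > max_age_hours:
        raise RuntimeError(
            f"model artifact is {age_hours:.1f}h old, exceeds {max_age_hours}h budget"
        )

if __name__ == "__main__":
    try:
        check_model_freshness("/models/churn/latest.pkl")
    except RuntimeError as err:
        print("ALERT:", err)   # in production this would page or open an incident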

 

7. Ownership Language Interviewers Listen For

Senior candidates naturally use ownership-oriented language:

  • “I was responsible for…”
  • “We decided to…”
  • “I owned the rollback…”
  • “I changed the process…”

Junior candidates use distancing language:

  • “The team did…”
  • “The model failed…”
  • “The data was bad…”

Interviewers notice this difference immediately.

Ownership language signals accountability.

 

8. When You Haven’t Seen a Large-Scale Failure (And How to Answer Honestly)

Some candidates worry they lack “big failure” stories.

Senior-style honesty looks like:

“I haven’t seen a catastrophic failure, but I’ve seen early warning signs. In one case, monitoring showed drift early, and we adjusted before impact.”

This is acceptable, and often preferred.

Interviewers value risk awareness, not just scars.

 

9. The Ultimate Signal: Designing Systems That Learn from Failure

The strongest answers end with systemic change:

  • new alerts
  • better validation
  • clearer ownership
  • safer automation
  • improved documentation

This shows that failure led to organizational learning, not just a fix.

Interviewers want engineers who improve systems over time.

 

10. The Signal Interviewers Are Extracting in Scenario Questions

By the end of these scenario-style questions, interviewers ask themselves:

  • Does this candidate expect things to break?
  • Do they detect problems early or late?
  • Do they blame or diagnose?
  • Do they patch or redesign?
  • Would I trust them during an incident?

Candidates who answer with polish but no substance fail.

Candidates who answer with clarity, humility, and structure advance.

 

Why Section 4 Is Often the Deciding Factor

Many candidates survive CI/CD and monitoring questions.

Far fewer demonstrate calm ownership under failure.

This is why scenario-based MLOps questions are often the final filter for senior and staff-level roles.

They reveal how you behave when:

  • metrics lie
  • automation backfires
  • alerts fire at 2 a.m.
  • business pressure increases

Senior MLOps engineers are hired not because they prevent all failures, but because they handle them responsibly.

 

CONCLUSION - MLOps Interviews Are Not About Pipelines. They’re About Accountability.

By the time an MLOps interview reaches CI/CD, monitoring, automation, and failure scenarios, interviewers are no longer evaluating technical competence in isolation.

They are evaluating trustworthiness under operational pressure.

Every question in an MLOps interview exists because something broke in the past:

  • a model degraded silently
  • a pipeline retrained garbage
  • an automated rollout caused harm
  • alerts fired too late
  • no one knew who owned the failure

Hiring teams have learned, often the hard way, that ML systems don’t fail loudly. They fail quietly, gradually, and expensively.

This is why MLOps interviews feel heavier than traditional ML interviews.

They are not testing:

  • how fast you can ship
  • how clever your architecture is
  • how many tools you know

They are testing:

  • how you think about risk
  • how you design for failure
  • how you balance automation with oversight
  • how you connect ML health to business impact
  • how you behave when systems degrade under pressure

Senior MLOps engineers don’t promise perfect pipelines.
They promise predictable behavior, early signals, and controlled failure.

Candidates who pass these interviews consistently demonstrate:

  • restraint over enthusiasm
  • structure over improvisation
  • ownership over deflection
  • prevention over firefighting

This mindset is exactly what modern hiring loops look for, especially as discussed in
Scalable ML Systems for Senior Engineers – InterviewNode,
where accountability and operational maturity outweigh model novelty.

If you approach MLOps interviews with this perspective, your answers stop sounding like explanations, and start sounding like decisions.

 

THE MLOps INTERVIEW CHECKLIST (Use This Before Every Interview Round)

This checklist is designed to anchor your thinking under pressure. It’s not a memorization aid; it’s a mental posture check.

 

Before Answering Any MLOps Question

  • Pause and clarify scope.
  • Ask yourself: What could break here?
  • Identify who is impacted if it fails.

 

CI/CD Questions

  • Treat deployment as a decision, not a step.
  • Mention data, model, and code versioning.
  • Talk about evaluation gates and rollback.
  • Avoid tool-centric answers without context.

 

Monitoring & Observability

  • Cover system, data, and model layers.
  • Treat accuracy as a lagging signal.
  • Emphasize leading indicators and drift.
  • Design alerts for action, not noise.

 

Automation & Retraining

  • Automate based on signals, not schedules.
  • Define blast radius before full automation.
  • Add human review where impact is high.
  • Watch for feedback loops and bias amplification.

 

Failure & Scenario Questions

  • Describe detection before correction.
  • Take responsibility, not credit or blame.
  • Explain diagnosis steps clearly.
  • End with systemic improvement, not a patch.

 

Language & Tone

  • Avoid absolutes (“always,” “never”).
  • Use calm, deliberate phrasing.
  • Prefer tradeoffs over prescriptions.
  • Admit uncertainty and explain how you’d resolve it.

 

Final Self-Check

After each answer, ask yourself:

“Would I trust this system if I owned it?”

If the answer is yes, you’re thinking like an MLOps engineer, not just an ML practitioner.