Introduction: Why “Build-and-Explain” Interviews Now Define ML Hiring
Where traditional ML interviews tested what you know, build-and-explain interviews test whether you can be trusted to build real systems.
In 2026, this format has become one of the most common, and most misunderstood, interview styles for ML and AI roles.
Candidates encounter them under many names:
- Live ML coding + explanation rounds
- Implement-and-walkthrough interviews
- Build-a-solution sessions
- Practical ML problem-solving rounds
And most candidates prepare for them incorrectly.
They assume:
“If my solution works, I’ll pass.”
That assumption is why many strong ML engineers fail.
What “Build-and-Explain” Actually Means
In a build-and-explain interview, you are evaluated on two parallel tracks:
- What you build
  - Code
  - Analysis
  - Architecture
  - Experiments
- How you explain it
  - Decision rationale
  - Tradeoffs
  - Constraints
  - Failure modes
  - Adaptability
You can pass with imperfect code.
You cannot pass with opaque reasoning.
Why Companies Prefer This Format in 2026
Hiring teams shifted toward build-and-explain interviews because they expose signals that traditional formats miss:
- Can the candidate reason while executing?
- Do they narrate decisions or hide them?
- Can they adapt when assumptions break?
- Do they prioritize under time pressure?
- Would I trust this person in a real incident?
These interviews simulate actual ML work far better than:
- Whiteboard theory
- Isolated coding puzzles
- Memorized system design templates
The Biggest Misconception Candidates Have
Most candidates optimize for:
- Correctness
- Speed
- Completeness
Interviewers optimize for:
- Judgment
- Ownership
- Communication
- Risk management
This mismatch is fatal.
A candidate who silently builds a correct solution often scores worse than a candidate who builds a simpler solution and explains tradeoffs clearly.
Why These Interviews Feel Harder Than They Are
Build-and-explain interviews are challenging because they:
- Remove the safety net of silence
- Force you to think out loud
- Penalize unexamined assumptions
- Make confusion visible
Candidates often say:
“I know how to do this, I just couldn’t explain it properly.”
That is the evaluation.
The Hidden Question Interviewers Are Asking
Interviewers are not asking:
“Can you build this?”
They are asking:
“What happens when this person builds something ambiguous, under pressure, with real consequences?”
They’re watching for:
- Calm under uncertainty
- Explicit tradeoffs
- Willingness to revise
- Ownership of decisions
- Communication reliability
This is why build-and-explain interviews correlate strongly with onsite and offer outcomes.
Why Strong ML Engineers Still Fail These Rounds
Common failure reasons:
- Coding silently for too long
- Explaining mechanics instead of decisions
- Avoiding tradeoffs
- Panicking when code isn’t perfect
- Over-optimizing for cleverness
- Failing to close the loop
None of these are ML skill gaps.
They are execution + communication gaps.
How to Reframe These Interviews Correctly
Think of a build-and-explain interview as:
A collaborative debugging session where your reasoning is the product.
Your code is evidence.
Your explanation is the signal.
Key Takeaway Before Moving On
In build-and-explain interviews:
- Silence is risk
- Over-explaining is also risk
- Judgment, not perfection, wins
Once you align your preparation to that reality, these interviews become one of the most controllable parts of the ML hiring process.
Section 1: Common Build-and-Explain ML Interview Formats (and What Each Tests)
“Build-and-explain” is not a single interview style.
It’s a family of formats that all share one property:
you must produce something and make your reasoning visible at the same time.
Candidates often prepare generically and fail because each format tests a different combination of execution and judgment.
Below are the most common build-and-explain ML interview formats used in 2026, and what each one is designed to surface.
Format 1: Live Coding + Reasoning (Narrated Implementation)
What it looks like
- Implement a function, pipeline, or ML component
- Code in real time
- Explain decisions while building
- Time-limited
What candidates think it tests
- Coding speed
- Syntax correctness
- Algorithm knowledge
What it actually tests
- Structured reasoning while executing
- Decision checkpoints
- Error recovery
- Communication under pressure
Interviewers listen for:
- Why you choose one approach over another
- Whether you explain complexity tradeoffs
- How you react to mistakes
Silent coding is interpreted as opaque thinking.
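To make “decision checkpoints, not keystrokes” concrete, here is a minimal sketch of a narrated baseline in Python. It assumes a synthetic dataset, and the checkpoint comments stand in for what you would say out loud; the specific choices are illustrative, not prescriptive.

```python
# A minimal sketch of a narrated baseline: the comments mirror the
# decision checkpoints you would voice in the interview. The dataset
# is synthetic and the choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Checkpoint 1: "I'm generating a small synthetic dataset so we can
# focus on modeling decisions rather than data loading."
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)

# Checkpoint 2: "I'm holding out a stratified test set because the
# classes are imbalanced and the evaluation should reflect that."
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Checkpoint 3: "I'm starting with logistic regression: fast,
# interpretable, and a reference point for anything fancier."
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Checkpoint 4: "I'm reporting F1 rather than accuracy because the
# positive class is rare and accuracy would look deceptively good."
print("baseline F1:", f1_score(y_test, model.predict(X_test)))
```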
Format 2: Build a Baseline, Then Improve It
What it looks like
- Start with a simple ML solution
- Incrementally improve it
- Explain each improvement
- Justify tradeoffs
What candidates think it tests
- Model sophistication
- Knowledge of advanced techniques
What it actually tests
- Prioritization
- Iterative thinking
- Restraint
- Understanding of diminishing returns
Strong candidates:
- Start simple
- Explain why the baseline exists
- Improve only when justified
Weak candidates jump straight to complexity.
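A hedged sketch of “start simple, improve only when justified”: a plain baseline on synthetic, imbalanced data, followed by one targeted change motivated by what the baseline exposes. The chosen improvement (class weighting) is one plausible next step, not the right answer for every problem.

```python
# Sketch: baseline first, then a single justified improvement.
# Synthetic, imbalanced data; class weighting is illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, precision_score

X, y = make_classification(n_samples=3000, n_features=15, weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

# Step 1: plain baseline, no tuning. It exists to expose the bottleneck.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("baseline recall:", recall_score(y_te, base.predict(X_te)))

# Step 2: the baseline misses many positives, so the one targeted
# improvement is class weighting; we knowingly trade precision for recall.
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
print("weighted recall:", recall_score(y_te, weighted.predict(X_te)))
print("weighted precision:", precision_score(y_te, weighted.predict(X_te)))
```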
Format 3: Debug-and-Explain (Broken Model or Code)
What it looks like
- Given buggy code, failing metrics, or odd outputs
- Asked to diagnose and fix issues
- Explain reasoning throughout
What candidates think it tests
- Debugging skill
- ML knowledge
What it actually tests
- Hypothesis-driven thinking
- Calm under uncertainty
- Ability to prioritize investigations
- Analytical discipline
Interviewers watch:
- Where you start
- What you ignore
- How you validate hypotheses
Jumping randomly between ideas is a red flag.
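To show what hypothesis-driven debugging can look like in code, here is a minimal sketch that tests one concrete hypothesis, feature drift between training and production data, before touching the model. The data is simulated and the 0.05 cutoff is a placeholder, not a recommendation.

```python
# Sketch: test one hypothesis at a time. Hypothesis: "performance
# dropped because the production feature distribution drifted."
# Data is simulated; the p-value cutoff is a placeholder.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_features = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))
prod_features = rng.normal(loc=[0.0, 0.6, 0.0], scale=1.0, size=(5000, 3))  # feature 1 drifted

for i in range(train_features.shape[1]):
    stat, p_value = ks_2samp(train_features[:, i], prod_features[:, i])
    verdict = "DRIFT" if p_value < 0.05 else "ok"
    print(f"feature {i}: KS={stat:.3f}, p={p_value:.4f} -> {verdict}")

# If a feature drifts, the next step is its upstream source, not the model.
# If nothing drifts, move to the next hypothesis (label delay, serving skew).
```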
Format 4: Build a Mini ML System (End-to-End)
What it looks like
- Design and implement a simplified ML system
- Cover data, model, evaluation, deployment
- Explain constraints and tradeoffs
What candidates think it tests
- System design knowledge
- ML breadth
What it actually tests
- Scope control
- Ownership
- Realism
- Production thinking
Strong candidates:
- Define clear boundaries
- Acknowledge what they’re not building
- Focus on critical paths
Over-engineering kills signal here.
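As an illustration of scope control, here is a hedged sketch of a deliberately small end-to-end slice: preprocessing, model, and evaluation in one pipeline, with everything else named as out of scope rather than half-built. Data and model choices are illustrative.

```python
# Sketch of a small end-to-end slice: preprocessing, model, and
# evaluation in one Pipeline. Serving, monitoring, and retraining are
# explicitly deferred rather than half-implemented.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=2)

pipeline = Pipeline([
    ("scale", StandardScaler()),                    # critical path: consistent preprocessing
    ("model", LogisticRegression(max_iter=1000)),   # simple, explainable default
])

# Cross-validation stands in for a fuller evaluation harness.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", scores.mean())

# Out of scope in this sketch: feature store, serving API, monitoring,
# retraining cadence. In the interview you say so explicitly.
```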
Format 5: Evaluation-Focused Build-and-Explain
What it looks like
- Given predictions or metrics
- Asked to compute, analyze, or visualize results
- Explain what conclusions you’d draw
What candidates think it tests
- Metric knowledge
- Statistical understanding
What it actually tests
- Judgment
- Error interpretation
- Business alignment
Interviewers want to see:
- Whether you connect metrics to decisions
- Whether you identify misleading metrics
- Whether you propose next actions
Reciting definitions scores poorly.
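A hedged example of connecting metrics to decisions: on highly imbalanced synthetic data, a respectable AUC can coexist with weak precision or recall at the threshold you would actually deploy. The data, thresholds, and resulting numbers are illustrative.

```python
# Sketch: why a headline AUC can mislead. Compare AUC against
# precision/recall at candidate operating thresholds on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, precision_score, recall_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03], random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=3)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

print("AUC:", round(roc_auc_score(y_te, scores), 3))
for threshold in (0.5, 0.2):
    preds = (scores >= threshold).astype(int)
    print(f"threshold={threshold}: precision={precision_score(y_te, preds, zero_division=0):.2f}, "
          f"recall={recall_score(y_te, preds):.2f}")

# The decision-relevant question is not "is AUC high?" but "at the
# threshold we would ship, what do false positives and misses cost?"
```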
Format 6: Experiment Design + Partial Implementation
What it looks like
- Design an experiment
- Implement a small part
- Explain hypotheses and evaluation
What candidates think it tests
- A/B testing knowledge
- Experimental design
What it actually tests
- Scientific thinking
- Causal reasoning
- Risk awareness
- Discipline
Strong candidates:
- State hypotheses clearly
- Define success criteria
- Discuss confounders
Weak candidates rush to implementation.
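When the “small part” you are asked to implement is analysis rather than modeling, it often looks like the sketch below: a pooled two-proportion z-test for an A/B comparison, with the hypothesis and one-sided alternative stated up front. The conversion counts are invented, and a real analysis would also cover power, confounders, and practical significance.

```python
# Sketch: a small, explicit piece of experiment analysis.
# H1: treatment conversion rate > control conversion rate.
# Counts are invented for illustration.
import numpy as np
from scipy.stats import norm

control_conv, control_n = 460, 10_000
treat_conv, treat_n = 520, 10_000

p_control = control_conv / control_n
p_treat = treat_conv / treat_n

# Pooled two-proportion z-test, one-sided.
p_pool = (control_conv + treat_conv) / (control_n + treat_n)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
z = (p_treat - p_control) / se
p_value = 1 - norm.cdf(z)

print(f"lift={p_treat - p_control:.4f}, z={z:.2f}, p={p_value:.4f}")

# In the interview you would also name confounders (novelty effects,
# traffic mix) and the monitoring you'd want before trusting the result.
```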
Format 7: Build-and-Explain With Live Changes
What it looks like
- Midway constraint changes
- New requirements
- Performance limits
- Data shifts
What candidates think it tests
- Flexibility
What it actually tests
- Adaptability
- Decision recalibration
- Emotional control
Interviewers watch:
- Do you panic?
- Do you justify changes?
- Do you re-prioritize calmly?
This format is a stress test.
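A hedged sketch of recalibrating under a new constraint, here an assumed 50 ms per-request latency budget: measure inference time for the current model, and if it blows the budget, fall back to a simpler model and say what accuracy you are trading. Models, data, and the budget are illustrative.

```python
# Sketch: responding to a mid-interview latency constraint. Measure
# per-request inference time; if the current model exceeds the budget,
# switch to a simpler one and narrate the tradeoff.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

BUDGET_MS = 50.0  # illustrative budget from the updated prompt
X, y = make_classification(n_samples=5000, n_features=50, random_state=4)

def p99_latency_ms(model, X, n_requests=200):
    """Per-request latency, measured one row at a time as a server would see it."""
    times = []
    for i in range(n_requests):
        row = X[i : i + 1]
        start = time.perf_counter()
        model.predict(row)
        times.append((time.perf_counter() - start) * 1000)
    return float(np.percentile(times, 99))

for name, model in [
    ("random_forest", RandomForestClassifier(n_estimators=300, random_state=4)),
    ("logistic_regression", LogisticRegression(max_iter=1000)),
]:
    model.fit(X, y)
    latency = p99_latency_ms(model, X)
    verdict = "within budget" if latency <= BUDGET_MS else "over budget -> simplify or distill"
    print(f"{name}: p99 ~ {latency:.1f} ms ({verdict})")

# In the interview, pair the switch with the cost: "I'm accepting some
# accuracy loss to meet the 50 ms requirement."
```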
Why Candidates Fail Across Formats
Most failures happen because candidates:
- Over-focus on correctness
- Under-communicate decisions
- Avoid tradeoffs
- Over-engineer early
- Treat explanation as secondary
Build-and-explain interviews punish that mindset.
A Unifying Insight Across All Formats
Across every build-and-explain format, interviewers ask:
“Can I see how this person thinks while doing real work?”
Your explanation is not commentary.
It’s the primary artifact.
Section 1 Summary
Build-and-explain ML interviews come in many formats:
- Live coding + narration
- Baseline → improvement
- Debugging sessions
- Mini system builds
- Evaluation-driven builds
- Experiment design rounds
- Live constraint changes
Each format tests:
- Execution and reasoning
- Judgment under pressure
- Ownership and realism
Candidates who treat these as coding tests fail.
Candidates who treat them as thinking-in-public exercises advance.
Section 2: The Scoring Rubric Interviewers Use in Build-and-Explain ML Interviews
Build-and-explain interviews feel subjective because the rubric is rarely explicit.
But interviewers are scoring you consistently, just not on what most candidates think.
They are not asking:
- “Did this code run?”
- “Was this the optimal solution?”
- “Did they use the best model?”
They are asking:
“Would I trust this person to build something real, under pressure, with incomplete information?”
That question drives the rubric.
The Core Principle Behind the Rubric
Build-and-explain interviews evaluate dual competence:
- Can you execute?
- Can you explain why you’re executing this way?
If either track collapses, confidence collapses.
Strong candidates understand that explanation is not overhead; it is the signal.
The Five Dimensions Interviewers Actually Score
Across companies and roles, the scoring of build-and-explain interviews reliably collapses into five weighted dimensions.
1. Problem Framing & Goal Alignment (Highest Weight)
Interviewers evaluate:
- Did you restate the problem clearly?
- Did you identify the real objective?
- Did you clarify constraints before building?
Weak signals:
- Jumping straight into code
- Assuming requirements
- Treating ambiguity as noise
Strong signals:
- Explicit restatement of goals
- Early constraint identification
- Clear success criteria
Interviewers trust candidates who build the right thing, not just something.
2. Decision-Making While Building
This is the heart of the format.
Interviewers listen for:
- Why you chose this approach
- Why you rejected alternatives
- How you prioritized steps
Weak signals:
- “I’m just going to try this”
- Listing options without choosing
- Over-hedging
Strong signals:
- “Given X constraint, I’m choosing Y”
- Explicit tradeoffs
- Conscious simplification
Candidates who avoid decisions look like they need constant guidance.
3. Communication Clarity During Execution
Interviewers are not scoring how much you talk.
They score how useful your explanation is.
Weak signals:
- Narrating keystrokes
- Long, unstructured monologues
- Explaining basics the interviewer already knows
Strong signals:
- Explaining decision checkpoints
- Summarizing progress
- Using simple structure (“First… Next… Finally…”)
If an interviewer can’t follow your reasoning in real time, trust erodes quickly.
4. Error Handling, Adaptability & Recovery
Mistakes are expected.
Your reaction to mistakes is heavily scored.
Weak signals:
- Panic
- Silence
- Defensiveness
- Rushing to hide errors
Strong signals:
- Calm acknowledgment
- Clear diagnosis
- Logical correction
- Explanation of why the fix works
Interviewers often trust candidates more after a well-handled mistake.
5. Judgment, Realism & Production Thinking
Build-and-explain interviews are proxies for real ML work.
Interviewers listen for:
- Awareness of data quality issues
- Latency or scale considerations
- Failure modes
- Monitoring and iteration
Weak signals:
- Idealized assumptions
- Over-engineered solutions
- Ignoring edge cases
Strong signals:
- Naming risks
- Scoping realistically
- Choosing robustness over cleverness
Judgment often outweighs sophistication here.
What Interviewers Are Not Scoring Heavily
Understanding what doesn’t matter helps candidates stop wasting effort.
Build-and-explain interviews usually do not heavily reward:
- Perfect syntax
- Fancy models
- Maximum optimization
- Exhaustive coverage
- Speed for its own sake
Candidates who optimize for these often lose higher-weight signals.
How Interviewers Synthesize Feedback
After the interview, feedback often sounds like:
- “Clear thinker, good tradeoffs, would hire.”
- “Technically capable, but explanation felt messy.”
- “Built something, but decision-making was unclear.”
Notice:
- “Correct” rarely appears.
- “Trust” is implicit everywhere.
Hiring decisions hinge on confidence, not correctness.
Why Imperfect Solutions Still Pass
Strong candidates often:
- Leave parts unfinished
- Choose simpler approaches
- Skip optimizations
They still pass because:
- Their reasoning is visible
- Their decisions are defensible
- Their judgment feels reliable
Weak candidates sometimes finish everything, and still fail, because their thinking remained opaque.
A Simple Internal Check for Candidates
At any point during a build-and-explain interview, ask yourself:
“If I stopped right now, would the interviewer understand why I made each major choice?”
If the answer is no, pause and explain.
That pause is not wasted time.
It’s signal.
Section 2 Summary
Build-and-explain ML interviews are scored on:
- Problem framing
- Decision-making
- Communication clarity
- Error recovery
- Judgment and realism
They are ownership simulations, not coding exams.
Candidates who:
- Make decisions explicit
- Explain tradeoffs
- Handle mistakes calmly
- Build realistically
Earn trust, and advance.
Section 3: Common Failure Patterns in Build-and-Explain ML Interviews (and How to Avoid Them)
Build-and-explain interviews don’t fail candidates loudly.
They fail candidates quietly.
Most rejections happen not because the solution was wrong, but because the interviewer lost confidence while watching the candidate build.
Below are the failure patterns interviewers see repeatedly in 2026, and why they matter far more than candidates realize.
Failure Pattern 1: Silent Building (The “Heads-Down” Trap)
What it looks like
- Candidate starts coding immediately
- Long stretches of silence
- Explanation only after implementation
Why it fails
In build-and-explain formats, silence is interpreted as:
- Opaque reasoning
- Poor collaboration instincts
- Risky execution style
Interviewers think:
“If I can’t see their thinking here, I won’t see it in production.”
How to avoid it
Narrate decision checkpoints, not keystrokes.
Say things like:
- “I’m starting with a simple baseline because…”
- “I’m choosing this structure to keep complexity low.”
Silence is not neutral; it's a negative signal.
Failure Pattern 2: Explaining Mechanics Instead of Decisions
What it looks like
- Line-by-line code narration
- Algorithm definitions
- Re-explaining basics
Why it fails
Interviewers already know how the code works.
What they’re evaluating is:
- Why this approach
- Why now
- Why not something else
Over-explaining mechanics crowds out judgment.
How to avoid it
Shift from how it works to why it’s chosen.
Good explanation:
“This approach trades memory for simplicity, which is acceptable given the constraints.”
Bad explanation:
“This function loops through the array…”
Failure Pattern 3: Over-Optimizing Too Early
What it looks like
- Jumping to advanced models
- Premature optimization
- Over-engineered abstractions
Why it fails
Interviewers interpret this as:
- Poor prioritization
- Inability to scope
- Lack of production realism
They worry you’ll:
- Waste time
- Increase risk
- Build fragile systems
How to avoid it
Start simple, and explain why.
Strong candidates explicitly say:
“I’m starting with a baseline to validate assumptions before adding complexity.”
This aligns closely with principles discussed in What Interviewers Look for in ML Project Reviews (Beyond Accuracy), where decision sequencing matters more than sophistication.
Failure Pattern 4: Avoiding Tradeoffs to Sound “Correct”
What it looks like
- “It depends” answers
- Listing multiple options
- Refusing to commit
Why it fails
Build-and-explain interviews exist to force decisions.
Avoiding tradeoffs signals:
- Low ownership
- Fear of being wrong
- Dependence on guidance
Interviewers think:
“This person won’t move without permission.”
How to avoid it
Make a decision, then own the downside.
Example:
“This approach risks X, but I’m accepting that to gain Y under the time constraint.”
Tradeoffs are not weaknesses; they're senior signals.
Failure Pattern 5: Panicking When Something Breaks
What it looks like
- Flustered tone
- Rushing fixes
- Ignoring the explanation
- Trying to hide errors
Why it fails
Errors are expected.
Poor recovery is not.
Interviewers extrapolate:
“This is how they’ll behave during incidents.”
How to avoid it
Slow down and narrate recovery:
- Acknowledge the issue
- State a hypothesis
- Apply a fix
- Explain why it works
A well-handled mistake often improves your evaluation.
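To make the four recovery steps concrete, here is a minimal sketch built around a common slip: reporting training accuracy as if it were held-out accuracy. The dataset is synthetic, and the comments stand in for the recovery narration.

```python
# Sketch: narrated error recovery on a common slip (scoring on the
# training set). The numbered comments mirror the recovery steps.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)

model = DecisionTreeClassifier(random_state=5).fit(X_tr, y_tr)

# 1. Acknowledge: "That near-perfect accuracy is suspicious; I think I
#    scored on the training set."
print("score I reported:", accuracy_score(y_tr, model.predict(X_tr)))

# 2. Hypothesis: an unconstrained tree memorizes training data, so a
#    perfect score on X_tr says nothing about generalization.

# 3. Fix: evaluate on the held-out split instead.
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))

# 4. Explain why it works: the held-out rows were never seen during
#    fitting, so the second number reflects generalization.
```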
Failure Pattern 6: Losing Structure as Time Runs Out
What it looks like
- Rushed additions
- Tangents
- No summary
- Abrupt ending
Why it fails
End-of-interview behavior weighs heavily.
Interviewers remember:
- How you close
- Whether you synthesize
- Whether you control scope
A messy ending erases earlier strength.
How to avoid it
Reserve the last 60–90 seconds to:
- Summarize decisions
- Reiterate tradeoffs
- State next steps
Knowing when to stop is a signal of judgment.
Failure Pattern 7: Treating Explanation as Commentary, Not Control
What it looks like
- Explaining after the fact
- Apologizing for decisions
- Speaking reactively
Why it fails
Explanation should drive the build, not trail it.
When explanation feels secondary, interviewers infer:
- Weak ownership
- Lack of intent
- Execution-first mindset
How to avoid it
Use explanation to lead:
- “Before I build this, here’s why…”
- “I’m choosing not to implement X because…”
Explanation is leadership, not narration.
Failure Pattern 8: Trying to Impress Instead of Being Reliable
What it looks like
- Fancy tricks
- Clever shortcuts
- Name-dropping techniques
Why it fails
Build-and-explain interviews reward:
- Reliability
- Predictability
- Sound judgment
Not brilliance theater.
Interviewers think:
“Would I trust this person with a real system?”
How to avoid it
Optimize for:
- Clarity over cleverness
- Simplicity over sophistication
- Trust over impressiveness
Why These Failures Are So Costly
Build-and-explain interviews are trust accelerators or trust destroyers.
Because execution and explanation happen together:
- Weak signals compound fast
- Interviewers have little room to reinterpret intent
- Confidence erodes quickly
That’s why strong ML engineers still fail these rounds.
Section 3 Summary
Common failure patterns include:
- Silent building
- Over-explaining mechanics
- Premature optimization
- Avoiding tradeoffs
- Poor error recovery
- Messy endings
- Reactive explanation
- Impressiveness over reliability
None of these are ML knowledge gaps.
They are judgment and communication failures under execution pressure.
The fix is not more studying.
It’s learning to build while making your thinking legible.
Section 4: Strong vs Weak Build-and-Explain ML Interview Behavior (Side-by-Side Examples)
In build-and-explain interviews, interviewers rarely reject ideas.
They reject how those ideas are built, justified, and adapted in real time.
Below are realistic interview scenarios where the technical solution is similar, but the behavioral signal is radically different.
Scenario 1: Live ML Coding (Baseline Model)
Prompt:
“Build a simple classifier for this dataset and explain your approach.”
Weak behavior
- Starts coding immediately
- Silent for long stretches
- Explains code line by line afterward
- Mentions accuracy at the end without justification
Interviewer interpretation
- Opaque reasoning
- Poor collaboration signal
- Low trust in ownership
“I don’t know how they made decisions.”
Strong behavior
- Pauses to frame: “I’ll start with a simple baseline to validate assumptions before optimizing.”
- Explains feature choice briefly
- Codes while narrating decision checkpoints, not syntax
- Mentions why accuracy is acceptable (or not)
Interviewer interpretation
- Clear thinker
- Good prioritization
- Safe to trust
“I can follow, and rely on, their thinking.”
Scenario 2: Debugging a Broken Pipeline
Prompt:
“This model’s performance dropped in production. Fix it.”
Weak behavior
- Jumps between code sections randomly
- Tries multiple fixes quickly
- Sounds rushed and defensive
- Doesn’t explain hypotheses
Interviewer interpretation
- Panic under pressure
- No diagnostic discipline
- Incident-risk profile
Strong behavior
- States a plan: “I’ll start by checking data drift before touching the model.”
- Forms explicit hypotheses
- Tests one thing at a time
- Narrates what each result means
Interviewer interpretation
- Calm under failure
- Strong debugging instincts
- Reliable during incidents
Scenario 3: Improving an Existing Model
Prompt:
“Improve this baseline and explain your choices.”
Weak behavior
- Immediately suggests a complex model
- Adds features without justification
- Mentions deep learning “for better accuracy”
- Ignores latency or data size
Interviewer interpretation
- Over-engineering tendency
- Weak cost awareness
- Poor scope control
Strong behavior
- Evaluates baseline first
- Identifies bottleneck (e.g., recall vs precision)
- Makes one targeted improvement
- Explicitly rejects complexity for now
Interviewer interpretation
- Senior prioritization
- Business-aware thinking
- Production realism
Scenario 4: Evaluation & Metrics
Prompt:
“Evaluate this model and explain the results.”
Weak behavior
- Recites metric definitions
- Declares success based on AUC
- Doesn’t connect metrics to outcomes
Interviewer interpretation
- Shallow evaluation
- Metric-driven, not impact-driven
Strong behavior
- Questions label quality
- Explains why AUC may mislead
- Chooses a metric aligned to harm
- Suggests next diagnostic step
Interviewer interpretation
- Judgment under uncertainty
- Real-world ML maturity
Scenario 5: Handling a Mid-Interview Constraint Change
Prompt update:
“Latency must be under 50ms.”
Weak behavior
- Tries to force-fit original design
- Defends previous choices
- Appears stressed
Interviewer interpretation
- Rigidity
- Poor adaptability
Strong behavior
- Reframes calmly: “This changes priorities; I’ll simplify the model and trade some accuracy.”
- Explains new tradeoff
- Adjusts plan visibly
Interviewer interpretation
- Adaptable under pressure
- Trustworthy decision-maker
Scenario 6: Explaining While Running Out of Time
Prompt:
“We have a few minutes left.”
Weak behavior
- Rushes to add features
- Stops mid-sentence
- No synthesis
Interviewer interpretation
- Poor scope control
- Anxiety-driven behavior
Strong behavior
- Stops building
- Summarizes: “Here’s what I built, why, and what I’d do next.”
- Names one risk and one follow-up
Interviewer interpretation
- Senior judgment
- Strong communication hygiene
Scenario 7: Mistake During Implementation
Prompt:
Code doesn’t work as expected.
Weak behavior
- Apologizes repeatedly
- Tries to hide the issue
- Rushes silently
Interviewer interpretation
- Fragile under failure
Strong behavior
- Acknowledges calmly
- Explains likely cause
- Fixes methodically
- Explains why fix works
Interviewer interpretation
- Resilient
- Incident-ready
What These Examples Reveal
Across all scenarios:
| Weak Signal | Strong Signal |
|---|---|
| Silence | Structured narration |
| Mechanics | Decisions |
| Cleverness | Judgment |
| Panic | Calm recovery |
| Coverage | Prioritization |
| Impressiveness | Reliability |
Interviewers consistently choose the candidate they trust, not the one who builds the most.
Section 4 Summary
Strong build-and-explain candidates:
- Frame before building
- Narrate decisions, not code
- Make and own tradeoffs
- Adapt calmly
- Recover visibly
- Close with synthesis
Weak candidates often:
- Build silently
- Explain too late
- Avoid commitment
- Over-optimize
- Lose structure under pressure
The difference is not ML knowledge.
It’s how well your thinking stays legible while you build.
Conclusion: Build-and-Explain Interviews Are Ownership Simulations, Not Coding Tests
Build-and-explain ML interviews exist because companies no longer want to guess how you’ll behave on the job.
They want to see it.
These interviews are not asking:
- “Can you code this perfectly?”
- “Do you know the best model?”
- “Can you finish everything on time?”
They are asking:
- “Can we see how you think while building?”
- “Do you make decisions under uncertainty?”
- “Do you prioritize correctly when time is limited?”
- “Do you recover calmly when things break?”
- “Would we trust this person with a real system?”
That’s why many strong ML engineers fail these rounds, and why others pass even with incomplete solutions.
Once you understand that explanation is the primary signal and code is supporting evidence, preparation becomes much more focused.
This aligns closely with the broader shift toward judgment-centric ML interviews, especially in formats like live case simulations and project reviews, as discussed in Live Case Simulations in ML Interviews: What They Look Like in 2026.
In 2026, build-and-explain interviews are not about showing brilliance.
They are about demonstrating reliability under execution pressure.
FAQs on Build-and-Explain ML Interviews (2026 Edition)
1. Do I need to finish the entire solution to pass?
No. Finishing matters far less than making your decisions clear and defensible.
2. Is it bad to pause and think before coding?
No. Brief pauses for framing are a positive signal.
3. How much should I talk while building?
Explain decisions and tradeoffs, not every keystroke.
4. What if I make a mistake while coding?
Mistakes are expected. Calm diagnosis and recovery often improve your evaluation.
5. Should I aim for the most optimal solution?
No. Aim for the most reasonable solution under the stated constraints.
6. Is silence ever okay?
Short silence is fine. Long, unexplained silence erodes trust.
7. How do interviewers judge seniority in these rounds?
Through prioritization, tradeoff ownership, and realism, not complexity.
8. Should I explain alternatives I’m not choosing?
Yes, briefly. That shows conscious decision-making.
9. What’s the biggest mistake candidates make?
Building silently and explaining too late.
10. Can I ask clarifying questions?
Yes, and strong candidates do so early.
11. What if I run out of time?
Stop building and summarize decisions, risks, and next steps.
12. Is it bad to choose a simple baseline?
No. It’s often the strongest possible signal.
13. How do interviewers evaluate communication?
By clarity, structure, and usefulness, not eloquence.
14. Should I try to impress with advanced ML techniques?
Only if they’re justified. Impressiveness without judgment hurts.
15. What mindset shift helps the most?
Stop trying to prove intelligence. Start demonstrating ownership.
Final Takeaway
Build-and-explain ML interviews reward candidates who can:
- Think out loud with intent
- Make tradeoffs under constraint
- Explain why as clearly as what
- Recover calmly from mistakes
- Maintain structure under pressure
They are not about perfect execution.
They are about trusting you to build something real when things aren’t clean.
If you prepare to make your thinking visible, not just your output, you turn one of the most intimidating ML interview formats into one of the most controllable.
That’s where offers are won.