SECTION 1 - The Architecture Phase: Designing Your Career Like an ML System Blueprint

Every ML system begins with architectural design: not coding, not data collection, and definitely not hyperparameter tuning. You begin with a blueprint. A conceptual map. A system diagram that articulates what you’re trying to build and why.

Your career requires the same level of architecture.

Most engineers skip this step. They jump directly into “collecting data” (experience) without defining:

  • the long-term direction,
  • the constraints,
  • the objectives,
  • the required capabilities,
  • the type of engineer they want to become,
  • the problems they want to solve.

This is why so many people feel lost after 3, 5, or 10 years.
They built a system without designing it.

Let’s walk through how a research-level ML engineer would architect their career.

 

1. Define the Objective Function (Your North Star)

In ML, your model’s behavior depends entirely on what metric you optimize.

Career growth works the same way.

If your objective function is vague (“get better,” “earn more,” “become senior someday”), the system wanders. But if your objective function is crisp, something like:

  • “I want to become an ML systems engineer working on large-scale distributed training.”
  • “I want to be a product-focused ML engineer working on applied generative models.”
  • “I want to transition into leadership and design ML roadmaps.”
…then suddenly your career decisions have clarity.

Most career stagnation comes from optimizing the wrong metric, or worse, optimizing no metric at all.
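The difference between a vague and a crisp objective can be made concrete with a toy sketch. Everything below (the weights, the option names, the attribute scores) is invented purely for illustration, not a real scoring model:

```python
# Toy sketch: a crisp objective makes career choices computable.
# The weights, option names, and scores are all hypothetical.

def objective(option, weights):
    """Score a career option against an explicit, weighted objective."""
    return sum(weights[k] * option.get(k, 0) for k in weights)

# Objective: "become an ML systems engineer" -> weight systems depth heavily.
weights = {"systems_depth": 0.5, "ml_exposure": 0.3, "compensation": 0.2}

options = {
    "infra_team": {"systems_depth": 9, "ml_exposure": 6, "compensation": 7},
    "analytics":  {"systems_depth": 3, "ml_exposure": 5, "compensation": 8},
}

# Pick the option that maximizes the stated objective.
best = max(options, key=lambda name: objective(options[name], weights))
```

With a vague objective there is no `weights` dict you could even write down; the moment you can write one, comparing paths stops being guesswork.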

 

2. Identify Constraints (They Shape Your Trajectory)

Every ML system has constraints:

  • compute
  • latency
  • cost
  • interpretability
  • throughput
  • reliability

Your career has constraints too:

  • geography
  • family responsibilities
  • available time
  • financial needs
  • skill gaps
  • immigration status
  • risk tolerance

These aren’t limitations; they’re engineering parameters.

Creative engineers thrive not despite constraints, but because constraints sharpen tradeoff reasoning.

 

3. Map Dependencies (Your Career is a DAG, Not a Line)

A common mistake engineers make: they assume careers progress linearly.

But careers behave like directed acyclic graphs: some skills unlock others, some paths require prerequisites, and some transitions become possible only after certain nodes are activated.

For example:

You can’t become a strong ML system designer without first developing:

  • data intuition,
  • modeling fundamentals,
  • tradeoff awareness,
  • real-world deployment exposure.

You can’t become a tech lead without:

  • communication clarity,
  • cross-functional alignment,
  • roadmap thinking.

Mapping your skill dependencies prevents wasted effort and helps you choose the right next steps.
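This dependency structure can be expressed literally as a DAG and topologically sorted. A minimal sketch using Python’s standard-library `graphlib`; the skill names and edges are illustrative assumptions, not a prescribed curriculum:

```python
from graphlib import TopologicalSorter

# Toy skill-dependency DAG: each key depends on the skills in its set.
# All names and edges are illustrative, not a prescribed curriculum.
skill_deps = {
    "tradeoff_awareness":  {"modeling_fundamentals"},
    "deployment_exposure": {"modeling_fundamentals", "data_intuition"},
    "ml_system_design":    {"data_intuition", "modeling_fundamentals",
                            "tradeoff_awareness", "deployment_exposure"},
    "tech_lead":           {"ml_system_design", "communication_clarity",
                            "roadmap_thinking"},
}

# A valid learning order: every skill appears after its prerequisites.
order = list(TopologicalSorter(skill_deps).static_order())
```

Run the sort on your own skill graph and “what should I work on next” becomes a query instead of a guess.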

 

4. Choose the Model Family (Your Identity as an Engineer)

Just like choosing between:

  • CNNs
  • Transformers
  • Gradient boosting
  • Recommender systems
  • Reinforcement learning models

…your career needs a sense of identity architecture.

Are you:

  • a systems-heavy ML engineer?
  • an applied ML engineer?
  • a backend-to-ML hybrid?
  • a research engineer focused on LLMs?
  • a PM-leaning ML strategist?

Choosing a model family doesn’t mean locking in your fate; it means defining your initial inductive biases. You can retrain later, but early architecture reduces noise and accelerates learning.

 

5. Backward Design Your Path (Reverse-Guided Planning)

In ML pipelines, we often work backward from the goal:

  • What does the model need to predict?
  • What data does that require?
  • What transformations must occur?
  • What architecture supports that?

Careers benefit from the same reverse engineering.

If your 3–5 year goal is clear, then:

  • What experiences will you need within 12 months?
  • What projects must you seek out?
  • What teams align with that direction?
  • What skills must you build deliberately?

Backward planning reduces your reliance on luck and builds intentionality.

Backward planning is also a core technique in top-tier ML interview prep, explored more deeply in:
➡️ Career Ladder for ML Engineers: From IC to Tech Lead
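Backward planning is just a reverse traversal of the same kind of dependency graph: start at the 3–5 year goal and recursively collect prerequisites. A minimal sketch, with every milestone name invented purely for illustration:

```python
# Hypothetical milestone graph: goal -> list of prerequisites.
prereqs = {
    "tech_lead_in_3_years": ["lead_a_project", "mentor_juniors"],
    "lead_a_project": ["own_a_system_design", "cross_team_visibility"],
    "mentor_juniors": [],
    "own_a_system_design": [],
    "cross_team_visibility": [],
}

def backward_plan(goal, graph):
    """Return milestones depth-first, most foundational first."""
    plan, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, []):   # visit prerequisites first
            visit(dep)
        plan.append(node)                 # then the milestone itself
    visit(goal)
    return plan

plan = backward_plan("tech_lead_in_3_years", prereqs)
```

The returned plan reads front to back as “what to build within 12 months” through “the 3–5 year goal”, which is exactly the ordering the questions above are probing for.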

 

The Architecture Phase Determines Everything

Weak careers emerge from accidental paths.
Strong careers emerge from intentional architecture.
Elite careers emerge from continual architectural redesign, just like production ML systems.

Once your architecture is clear, you move to the next essential layer: continuous learning loops, the backbone of a self-improving career.

 

SECTION 2 - The Learning Layer: Designing Inputs, Signals, and Representations for Your Career

If you think about your career as a machine learning system, the first thing you notice is that every system is only as good as its inputs—its data, features, signals, priors, noise filters, and feedback channels. The way you learn, what you consume, who you surround yourself with, the questions you ask, and the situations you expose yourself to—all of these become your training data. And just as in ML, poor data leads to poor models. Rich, diverse, high-signal data produces robust, adaptive, generalizable performance.

Yet most professionals don’t design their “learning inputs” with any intention. They let randomness shape their development. They learn passively, reactively, or accidentally. They consume whatever content reaches them on social feeds, whatever advice their peers happen to offer, whatever habits their environment normalizes. In ML terms, they’re a model trained on convenience sampling: biased, sparse, and misaligned.

Strong engineers think differently.
They design the learning layer of their career as consciously as they’d design an ML feature pipeline.

They examine the quality of information they ingest.
They curate the people they learn from.
They select problems that upgrade their reasoning.
They adjust environments that distort their thinking.
They seek diversity of inputs to avoid cognitive overfitting.
They make learning a deliberate part of their architecture—not an accidental byproduct.

This section explores how to construct the learning layer of your career as intentionally as the data pipeline of a production ML system.

 

Learning as a Data Pipeline, Not an Accident

Think of your learning routine as a pipeline:

  • Sources
  • Filtering
  • Transformation
  • Storage
  • Retrieval
  • Application

Most people treat learning like an unstructured firehose: endless content, no filtering, no post-processing, no integration. But effective career growth requires a pipeline where inputs are filtered for relevance, transformed into understanding, stored meaningfully, and retrieved when needed.

For example:

  • Reading a research paper is the raw data phase.
  • Summarizing it in your own words is the feature extraction phase.
  • Using its ideas in a project becomes the deployment phase.

Passive reading is the least useful form of learning. Transformation - turning information into insight - is what actually improves the model.

Similarly, engineers who grow fastest don’t just “read more.” They convert their learning into frameworks, principles, and mental models that shape their decisions.
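The phases above can be sketched as an actual pipeline. The sample items and the relevance rule below are placeholders, purely to show the shape of filter, transform, and store:

```python
# Minimal learning-pipeline sketch: filter -> transform -> store.
# Items and the relevance rule are placeholders, not a recommender.

def run_pipeline(items, is_relevant, transform, store):
    """Filter raw inputs, transform them into notes, persist them."""
    for item in items:
        if is_relevant(item):
            store.append(transform(item))
    return store

raw = ["clickbait thread", "systems-design postmortem",
       "recycled listicle", "distributed-training paper"]

notes = run_pipeline(
    raw,
    is_relevant=lambda s: "postmortem" in s or "paper" in s,  # filtering
    transform=lambda s: f"summary({s})",                      # feature extraction
    store=[],                                                 # storage
)
```

The point of the sketch: most content never survives the filter, and nothing enters storage untransformed.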

 

Avoiding Cognitive Overfitting

Career stagnation often resembles overfitting:

  • You get good at the narrow tasks your current role demands.
  • You optimize for local performance instead of global adaptability.
  • You rely on habitual reasoning patterns.
  • You limit exposure to unfamiliar problems.

Your “model” (your thinking) stops generalizing.

Strong engineers avoid this by intentionally injecting distributional diversity into their learning. They expose themselves to:

  • unfamiliar domains
  • new technologies
  • business stakeholders
  • ambiguous problem spaces
  • architecture-level thinking
  • constraints they’ve never handled before

This intentional domain mixing strengthens their cognitive flexibility.

It’s identical to why ML models require diverse datasets.
A narrow dataset produces brittle performance.
A diverse one produces robustness.

Humans are no different.

This principle is also highlighted in ML interview performance: candidates who generalize across domains consistently outperform those who rely on narrow templates, a concept explored in:
➡️ Pattern Recognition vs. Creativity: What ML Interviews Really Measure

Your learning sources shape your cognitive flexibility.
Your cognitive flexibility shapes your career.

 

Designing High-Signal Inputs

Not all inputs carry the same value.
Most learning materials are noise: surface-level content, duplicated insights, derivative advice, or recycled frameworks.

Research athletes curate for high-signal sources:

  • domain experts
  • deep long-form content
  • real-world case studies
  • design documents
  • postmortems
  • technical deep dives
  • codebases
  • systems built by teams better than their own

They treat their attention as a scarce resource.
They protect the integrity of their input data.

A career shaped by high-signal inputs compounds faster than one driven by algorithmic feeds and shallow content.

You don’t just “level up” faster; you think differently.

 

Feature Engineering Your Mindset

If the data you consume is the raw input, your mindset is the feature engineering layer.

Two engineers can read the same book. One absorbs strategies. The other extracts principles. Two engineers can attend the same meeting. One listens for instructions. The other listens for how decisions are made. Two engineers can debug the same issue. One patches the problem. The other learns a pattern about system behavior.

The difference isn’t knowledge—it’s transformation.

Strong engineers actively transform observations into reusable cognitive features. They ask:

  • What principle does this situation reveal?
  • What pattern is emerging here?
  • How does this apply across domains?
  • What does this say about good engineering judgment?
  • What does this show about organizational incentives?

These transformations become the “feature set” their career model learns from.

Knowledge without transformation is noise.
Transformed knowledge is signal.

 

Learning at the Right Difficulty Level

Just like curriculum learning in ML, where models improve faster when tasks increase gradually in complexity, human skill development depends heavily on the sequence of difficulty.

If you learn random things in random order, you never build conceptual scaffolding.
If you learn only easy things, you stagnate.
If you jump immediately to overly advanced things, you get discouraged.

Strong engineers shape their learning curve intentionally:

  • foundational → applied → interdisciplinary → strategic
  • simple → complex → ambiguous → abstract
  • consumption → practice → performance → teaching

Their learning journey has direction.

The key question isn’t “What should I learn?”
It is “What should I learn next?”

That ordering is everything.
It determines how fast your mind scales.

 

Turning Curiosity Into a System

Curiosity is the raw energy of career growth, but without structure, it disperses. Research athletes build habits around curiosity:

  • weekly exploration sessions
  • problem-driven research
  • curiosity sprints
  • deep dives on concepts they don’t yet understand
  • systematic reading around emerging technologies

Curiosity becomes a scheduled input.
Schedule turns curiosity into skill.
Skill turns curiosity into opportunity.

Engineers who practice structured curiosity become the ones who discover new roles, new technologies, new paths, and often new identities.

 

Your Learning Layer Determines Your Future State

In ML systems, the quality of outputs is limited by the quality of inputs and features.

In your career, your future is constrained by:

  • what you choose to learn
  • what you choose to ignore
  • what ideas you transform
  • what environments you expose yourself to
  • what people you seek guidance from
  • what problems you attempt
  • what difficulties you avoid

The choices you make today about your learning pipeline subtly but powerfully shape the engineer you become.

Your learning layer is not a background process.
It is the foundation of your career architecture.

 

SECTION 3 - Feedback Loops: The Engine That Separates Career Drift From Career Optimization

If continuous learning is the data pipeline of your career, then feedback loops are the gradient updates, the machinery that converts experience into progress. Without feedback, your career isn’t optimizing. It’s simply accumulating noise. And noise, when left unprocessed, leads to drift.

Most professionals spend years working, interviewing, or building projects without ever integrating real feedback. They move forward linearly, collecting experience but not direction. Their signal-to-noise ratio steadily declines. Their growth plateaus. They wonder why more years of experience aren’t translating into higher levels of opportunity.

It’s not because they aren’t learning.
It’s because they aren’t learning from correction.

In ML, a model that never receives gradient updates eventually becomes misaligned with reality. Humans are no different. Without feedback loops, careers become stale approximations of what they once were capable of becoming.

Let’s break down how research-minded professionals, people who treat their careers like evolving ML systems, design feedback loops that actually change their trajectory.

 

1. Why Most Professionals Avoid Real Feedback (And Why It Hurts Them)

Feedback is cognitively uncomfortable because it exposes discrepancy: the gap between where you think you are and where you actually are. But optimization requires discrepancy. Without it, there’s no gradient to descend, no direction to improve.

Most people avoid feedback because:

  • It threatens their self-concept
  • It creates emotional friction
  • It forces uncomfortable self-honesty
  • It disrupts comforting narratives of competence
  • It reveals blind spots
  • It introduces uncertainty

So they choose comfort over clarity.
They choose ego over accuracy.
They choose stability over optimization.

But real career growth, especially in the ML and AI world, is brutally meritocratic. It rewards those who can process feedback effectively and punishes those who can’t.

Professionals who grow quickly are not the ones who avoid error.
They are the ones who extract value from error.

This fundamental truth mirrors one of the most important aspects of ML interviews as well, where interviewers often test whether a candidate can modify reasoning when confronted with new constraints. For more on this hidden skill, see:
➡️ How to Decode Feedback After a Failed ML Interview (and Improve Fast)

The ability to metabolize feedback is what creates compounding growth.

 

2. The Three Types of Career Feedback (And Why Only One Truly Matters)

Feedback in a career, just like in ML systems, comes in multiple forms. But not all forms are equally useful. High performers learn to distinguish three feedback categories:

A. Outcome Feedback

This is the simplest form: you got promoted, or you didn’t. You passed the interview, or you didn’t. You got the job, or you didn’t.

Outcome feedback is binary and often misleading.
It measures results, not reasoning.

It tells you what happened, not why.

B. Social Feedback

This is feedback from peers, managers, mentors. It can be helpful, but often it’s filtered through politeness, bias, culture, or incomplete information.

Social feedback is noisy.
Sometimes useful, sometimes dangerous.

C. Behavioral Feedback

This is the gold standard.
It measures your choices, your reactions, your reasoning paths, and your decision patterns.

Behavioral feedback is the closest thing to gradient information you get in your career.

It includes questions like:

  • What did you choose to focus on?
  • How did you respond to pressure?
  • How did you weigh tradeoffs in your last role transition?
  • How did your reasoning evolve in the last year?
  • What skill bottlenecks slowed you down most?

This form of feedback generates actionable insights because it examines the function of your career system, not just the outputs.

Professionals who grow quickly collect behavioral feedback deliberately, not just when things go wrong, but especially when things go right.

 

3. How to Build a Personal “Career Monitor” That Mirrors ML Observability

Modern ML systems rely on observability: metrics, logs, monitoring dashboards, alerts, evaluations. Careers need the same.

Research-minded professionals create career observability systems that track:

  • Skill drift
  • Strength emergence
  • Motivation changes
  • Decision quality over time
  • Repeated breakdowns
  • Execution patterns
  • Burnout signals
  • Impact-to-effort ratios

This isn’t journaling.
This is self-instrumentation.

They review:

  • quarterly performance patterns
  • learning velocity
  • what types of problems drain vs energize them
  • what assignments create leverage vs stagnation
  • how often they took on meaningful challenges
  • which skills remain untrained
  • how their thinking has evolved

Over time, they build a detailed internal telemetry system.
This mirrors ML monitoring pipelines, where drift detection signals when the model must be retrained.

Careers without monitoring drift into misalignment:
wrong roles, wrong goals, wrong skill prioritization.

Careers with monitoring stay calibrated.
Because they can see themselves clearly.
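A drift monitor of this kind does not need to be elaborate. A toy sketch: track one self-chosen metric (say, new concepts learned per month) and flag when the recent average falls well below your baseline. The metric, window, and threshold here are all illustrative assumptions:

```python
# Toy drift detector over a self-tracked career metric.
# Window size and drop_ratio are arbitrary illustrative thresholds.

def drift_detected(history, window=3, drop_ratio=0.5):
    """Flag drift when the recent average falls well below the baseline."""
    if len(history) < 2 * window:
        return False                         # not enough telemetry yet
    baseline = sum(history[:window]) / window
    recent = sum(history[-window:]) / window
    return recent < drop_ratio * baseline

# e.g., "new concepts learned per month", trending downward
learning_velocity = [8, 7, 9, 4, 2, 1]
```

The same check works for any of the signals listed above; what matters is that the series exists at all, so the comparison can be run.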

 

4. Feedback Should Be Processed Like Model Gradients, Not Taken Personally

Professionals who stagnate treat feedback as judgment:
“I’m not good enough.”
“I’m failing.”
“I’m disappointing people.”

Professionals who evolve treat feedback as information:
“This reveals a constraint.”
“This exposes a blind spot.”
“This highlights an optimization opportunity.”

The difference is identity.
One protects the ego.
The other protects the system.

In ML, gradient updates don’t shame the model—they refine it.

Feedback is not an evaluation of self-worth.
It is a signal that can refine your trajectory.

Once you adopt this mindset, feedback stops being emotional and becomes strategic.

 

5. The Tightest Feedback Loop Wins (Career Version of Fast Gradient Updates)

Models that train with faster, cleaner gradient updates converge faster.

So do careers.

Professionals who seek feedback:

  • yearly → stagnate
  • quarterly → maintain
  • monthly → progress steadily
  • weekly → accelerate
  • daily → transform

Fast feedback loops create compounding improvement.
They prevent drift.
They produce clarity.
They uncover blind spots quickly.
They keep learning dynamic.
They prevent career stalling.

A career with slow feedback moves like batch gradient descent: slow, lumbering, data-hungry, and brittle.

A career with fast feedback moves like stochastic gradient descent: adaptive, iterative, continually improving.

Your performance improves not from grand moments of insight but from frequent, small course corrections.
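The compounding effect of update frequency can be shown with two lines of arithmetic. The step sizes below are arbitrary; the point is that many small corrections outrun one large one:

```python
# Toy illustration: frequent small corrections vs. one big annual one.
# The numbers are arbitrary; only the compounding matters.

def close_the_gap(gap, step, updates):
    """Each update removes a fixed fraction of the remaining gap."""
    for _ in range(updates):
        gap -= step * gap   # one course correction
    return gap

goal_gap = 100.0
yearly  = close_the_gap(goal_gap, step=0.5, updates=1)    # one big review
monthly = close_the_gap(goal_gap, step=0.1, updates=12)   # twelve small check-ins
```

Even though each monthly correction is five times smaller than the annual one, the remaining gap after a year is smaller on the monthly schedule, which is the whole argument for tight loops.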

 

SECTION 4 - Optimization Loops: Turning Career Growth Into a Self-Improving ML System

If you observe the careers of top ML engineers, those who move from junior to senior roles quickly, those who transition seamlessly across domains, those who stay relevant across technological shifts, you will notice something striking: their careers behave like optimized systems. They don’t drift. They don’t stagnate. They don’t fear change. Their growth curves aren’t accidental; they are engineered.

Most people treat their career like a series of disconnected events.
High performers treat their career like an iterative optimization loop.

Like ML systems, career trajectories can degrade without monitoring, lose performance without fine-tuning, drift without feedback, and stagnate without new data. But with the right feedback loops, guardrails, and continuous retraining, your career becomes an adaptive engine, capable of evolving even as the industry transforms around you.

The key difference is intentionality.
Weak career systems react.
Strong career systems adapt, learn, and optimize.

This section breaks down how top engineers construct an optimization loop around their professional development, using the same principles they apply when tuning real ML pipelines.

 

1. They Treat Every Work Cycle as a Training Cycle

Your career, like an ML model, learns from data.
The question is: what data do you feed it?

Weak performers feed their career passive data:

  • another quarter of routine tasks
  • another year of maintaining legacy systems
  • another performance review with vague feedback

This is the equivalent of training a model on stale, repetitive data: it won’t generalize, it won’t improve, and it won’t survive distribution shift.

High performers instead feed their career targeted, diverse, high-signal data:

  • technically ambiguous problems
  • new system designs
  • cross-functional collaborations
  • forward-facing architecture discussions
  • customer-impact projects
  • deep postmortem participation
  • mentorship opportunities
  • domain shifts that stretch intuition

Each cycle provides new “training examples” that build nonlinear growth, not linear drift.

They don’t wait for opportunities—they create training cycles intentionally.

This aligns strongly with principles found in career-development strategies discussed in:
➡️ Career Ladder for ML Engineers: From IC to Tech Lead

A career without structured training cycles decays.
A career with continuous training cycles compounds.

 

2. They Build Career Metrics the Same Way They Build ML Metrics

Most people track their career with meaningless metrics:

  • job title
  • years of experience
  • number of projects
  • lines of code
  • team size

These are vanity metrics.

High performers choose optimization metrics that reflect actual growth:

  • Rate of skill acquisition
  • Depth of systems understanding
  • Breadth of domain exposure
  • Impact velocity (speed from idea → execution → outcome)
  • Failure-mode awareness
  • Communication clarity in technical settings
  • Ability to guide architectural decisions
  • Adaptability to new technologies

These metrics actually predict seniority.

Just like ML evaluation metrics, career metrics must:

  • represent real performance
  • align with long-term goals
  • capture quality, not quantity
  • predict real-world outcomes

Once you choose the right metrics, your actions naturally shift toward improvement.
Metrics shape behavior, just like in ML.

 

3. They Use Feedback Like Gradient Signals, Not Like Criticism

In an ML system, gradients tell you:

  • what direction to move
  • how much to adjust
  • where loss is coming from
  • which parameters need updating

Most people treat career feedback emotionally: they feel judged or defensive.
High performers treat feedback mathematically: they see gradients.

Their inner dialogue sounds like:

“What is this signal telling me to adjust?”
“What parameter needs to be tuned?”
“What environment moved?”
“What’s the underlying loss function?”
“What does this teach me about how I’m perceived?”

Instead of rejecting feedback, they instrument it.
Instead of fearing mistakes, they measure them.
Instead of getting offended, they optimize.

This transforms feedback from something painful into something actionable.

And because they optimize faster than their peers, their careers accelerate.

 

4. They Treat Mentorship Like Transfer Learning

Trying to grow without mentors is like training a massive model from scratch on a laptop: it’s technically possible but horribly inefficient.

High performers optimize using “pretrained weights”, the insights of:

  • senior engineers
  • cross-team architects
  • product thinkers
  • researchers
  • engineering managers
  • tech leads at other companies

Instead of reinventing emotional and professional struggles, they load pretrained “model weights” from people who have already encountered the same challenges.

This dramatically speeds up convergence.

Effective mentorship is transfer learning for your career.

Just as a model fine-tunes on your task, your career fine-tunes using the accumulated experience of others.

 

5. They Monitor for Drift Before Performance Drops

ML systems degrade over time.
So do careers.

Career drift happens when:

  • your skills no longer match the market
  • your work becomes repetitive
  • your growth stalls silently
  • you stop learning new abstractions
  • industry tech shifts but you don’t
  • you become “the person who maintains X”
  • your curiosity dulls
  • your outputs become predictable

High performers treat drift as a measurable risk.
They monitor their own relevance:

“What did I learn this quarter?”
“What new abstractions did I gain?”
“What new ML or systems patterns did I internalize?”
“What would break if I had to interview at a FAANG company tomorrow?”
“Did I become 10% sharper or 10% slower?”
“What frontier skills am I currently missing?”

Career drift is subtle but deadly.
The antidote is continuous monitoring and adjusting before performance visibly drops.

 

Conclusion - Your Career Is a Living System. Design It Like One.

If there’s one idea that sits at the center of this entire philosophy, it’s that your career is not a timeline; it is a system. And like any ML system, it must be continually refined, retrained, monitored, debugged, optimized, stress-tested, and aligned with changing requirements.

Yet most professionals treat their careers like static artifacts.
They pick a job, perform tasks, wait for opportunities, hope for promotions, and react to circumstances. They behave as if careers grow linearly, automatically, predictably.

But strong engineers, the ones who rise fastest, adapt easiest, and stay relevant, treat their careers the way ML teams treat production systems:

  • continuously learning
  • continuously correcting
  • continuously optimizing
  • continuously aligning
  • continuously experimenting

They don’t ask:
“What title should I chase next?”
They ask:
“What constraints am I operating under?”
“What signal am I producing?”
“What feedback loops do I have?”
“What failure modes should I anticipate?”
“What capabilities should I expand?”
“What part of the system is degrading?”

Careers stagnate when feedback weakens, when drift goes unnoticed, when misalignment persists, and when assumptions remain unchallenged.
Careers accelerate when you deliberately design the loops that guide your growth.

Because the truth is simple:
The strongest careers are engineered. They don’t happen by accident.

If you approach your career like an ML system, you’ll naturally build:

  • the ability to learn faster than others
  • the humility to evaluate yourself honestly
  • the strategy to optimize what matters
  • the resilience to handle drift and setbacks
  • the alignment to choose the right problems
  • the wisdom to reinvent yourself when needed

And most importantly, you’ll stop being reactive.
You’ll become the architect of your professional trajectory, not the passenger.

Your career is a system.
A dynamic, evolving, high-dimensional system.

You can let the world train it for you, or you can train it yourself.

 

FAQs 

 

1. What does it mean to design a career like an ML system?

It means you model your career as a living structure with inputs (projects, mentorship), outputs (impact, skills), evaluation metrics (growth, satisfaction), constraints (time, environment), and feedback loops. This lens gives you control instead of relying on chance.

 

2. Why is continuous learning essential in this framework?

Because your “model weights” (your skills, knowledge, context, and adaptability) decay without retraining. Technology evolves too quickly for static skillsets. Continuous learning prevents cognitive drift.

 

3. What counts as feedback in a career system?

Feedback includes performance reviews, peer opinions, interview failures, promotions, rejections, recruiter messages, job market trends, and even your own emotional reactions to work. All of it is data.

 

4. How do I detect drift in my career?

Career drift happens when your day-to-day work diverges from your long-term goals. If your projects, skills, or responsibilities no longer match where you want to go, drift has begun.

 

5. How can I add better feedback loops to my career?

You can seek mentorship, request more frequent reviews, conduct quarterly self-assessments, analyze interview feedback, or regularly compare your skillset to job descriptions of roles you want.

 

6. What is a “career metric,” and how do I define mine?

Career metrics are your success indicators: impact, learning speed, compensation, work-life balance, autonomy, domain mastery, or leadership growth. They anchor your decisions and tradeoffs.

 

7. Should my career be optimized for one metric or several?

Single-metric optimization is fragile. Multi-objective optimization (e.g., learning + compensation + fulfillment) produces stability. But the weights change by career stage, and that’s normal.

 

8. What does “debugging your career” look like?

It means analyzing recurring failures (e.g., interview breakdowns, project slowdowns), identifying patterns, and implementing corrective strategies, just like debugging ML pipeline bottlenecks.

 

9. How does experimentation help career growth?

Experimentation reduces fear and increases adaptability. Taking on atypical projects, exploring new tools, or switching domains builds versatility, similar to data augmentation for skill diversity.

 

10. Can I over-optimize my career?

Yes, over-optimization can create tunnel vision. Just like ML models that overfit, careers can overfit to narrow goals, roles, or environments. Regular exploration prevents this.

 

11. What are career failure modes I should watch for?

Common failure modes include stagnation, role mismatch, burnout, poor alignment with leadership, skill decay, complacency, bad managers, and lack of strategic direction.

 

12. How do I build an effective growth loop?

A good loop includes:
Action → Feedback → Reflection → Adjustment → New Action
Repeated deliberately, this produces exponential growth.

 

13. What’s the biggest mistake people make when designing their career?

They optimize for titles instead of trajectories. Titles change slowly. Skills, impact, and market positioning change fast. Chase slope, not checkpoints.

 

14. Is switching jobs necessary to optimize my career?

Not always. Some systems improve with tuning, not replacement. But if your environment limits learning or feedback, switching becomes the cleanest optimization.

 

15. What’s the “north star” metric for career design?

It varies, but the most universal metric is “rate of improvement.”
If you’re improving fast, everything else compounds, skills, compensation, opportunity, and career durability.