
FDE Interview Scorecard Template

Purpose

Standardized evaluation rubric for assessing FDE candidates across key competencies. Ensures consistent assessment across interviewers and provides structured decision-making for hiring committees.

When to Use

  • During every FDE interview round
  • For calibration across interviewers
  • For hiring committee decisions
  • For tracking interview performance vs. on-the-job success

How to Use This Scorecard

  1. Use one scorecard per interview round - Each round evaluates different dimensions
  2. Fill out immediately after the interview - Don't wait; details fade quickly
  3. Submit before group discussion - Avoid anchoring bias from others' opinions
  4. Use behavioral anchors - Rate based on what you observed, not gut feel
  5. Write specific examples - "They did X which showed Y" not "seemed good"
  6. Be honest about uncertainty - If you're not sure, say so and why
  7. Compare to the bar, not other candidates - Would you hire this person? Yes/No.

Scoring Scale:

Individual dimensions are rated 1-5 against the behavioral anchors below (anchors are defined for 1, 3, and 5; use 2 or 4 for in-between performance). Overall recommendations use a four-point scale (the phone screen uses a simpler Pass/Reject):

  • Strong No Hire: Clear weakness, would not succeed in role
  • No Hire: Below bar, significant concerns
  • Hire: Above bar, clear strengths
  • Strong Hire: Exceptional, significantly exceeds bar

Scorecard: Round 1 - Phone Screen

Purpose: Filter obvious mismatches quickly. 15-30 minute conversation.

Interviewer: _____ Date: _______


Candidate Information

  • Name: _________
  • Position Applied For: _________
  • Resume Review Notes:

Communication Clarity (1-5)

Rating | Behavioral Anchor
1 | Unclear, hard to follow, verbose, or can't articulate thoughts
3 | Clear enough, some rambling but gets point across
5 | Crisp, concise, well-structured answers that adapt to audience

Rating: _____ / 5

Evidence:


Interest & Motivation (1-5)

Rating | Behavioral Anchor
1 | Generic interest, applying broadly, doesn't understand FDE work
3 | Reasonable interest, understands basics of client-facing technical work
5 | Specific excitement about FDE work, has researched role, asks good questions

Rating: _____ / 5

Evidence:


Key Questions Asked

Q1: "Tell me about a time you had to figure something out with minimal documentation or guidance."

Response quality: ☐ Weak ☐ Adequate ☐ Strong

Notes:

Q2: "What draws you to client-facing technical work vs. pure product development?"

Response quality: ☐ Weak ☐ Adequate ☐ Strong

Notes:

Q3: "Describe a situation where you had to push back on a client or stakeholder."

Response quality: ☐ Weak ☐ Adequate ☐ Strong

Notes:


Red Flags (check if present)

  ☐ Can't articulate why they're interested in FDE work
  ☐ Seems to want a remote job, doesn't care about role specifics
  ☐ No examples of working through ambiguity
  ☐ Never pushed back on stakeholders (always says yes)
  ☐ Communication is painful or unclear

Overall Recommendation

  • Pass to Technical Screen - Clear communication, genuine interest, no red flags
  • Reject - Communication issues, misunderstanding of role, or clear mismatch

Justification (required):


Scorecard: Round 2 - Technical Screen

Purpose: Verify baseline technical competence. 60 minute session.

Interviewer: _____ Date: _______


Candidate Information

  • Name: _________
  • Position Applied For: _________
  • Technical Focus Area: ☐ Backend ☐ Frontend ☐ Data/ML ☐ Full-Stack

Technical Competence (1-5)

Rating | Behavioral Anchor
1 | Major gaps, can't work through basic problems, fundamental misunderstandings
3 | Adequate, handles standard problems, some gaps but workable
5 | Exceeds bar, strong technical foundation, handles edge cases naturally

Rating: _____ / 5

Evidence:


Problem-Solving Approach (1-5)

Rating | Behavioral Anchor
1 | Jumps to solutions, doesn't ask questions, stuck without hints
3 | Reasonable structure, asks some clarifying questions, makes progress
5 | Systematic approach, excellent questions, handles ambiguity well

Rating: _____ / 5

Evidence:


Communication of Technical Thinking (1-5)

Rating | Behavioral Anchor
1 | Silent coding, can't explain reasoning, uses jargon without clarity
3 | Explains what they're doing, reasoning is followable
5 | Clear narration of thought process, explains trade-offs, adapts explanation to audience

Rating: _____ / 5

Evidence:


Technical Assessment Details

Problem(s) Given:

Approach Taken:

Solution Quality: ☐ Doesn't work ☐ Works with bugs ☐ Works correctly ☐ Elegant solution

Edge Cases Considered: ☐ None ☐ Some ☐ Comprehensive


Red Flags (check if present)

  ☐ Can't code without constant help
  ☐ Doesn't ask clarifying questions
  ☐ Gives up when stuck
  ☐ Can't explain their reasoning
  ☐ Defensive when given feedback

Overall Recommendation

  • Strong Hire - Significantly exceeds technical bar
  • Hire - Meets or exceeds technical bar
  • No Hire - Below technical bar
  • Strong No Hire - Major technical gaps

Justification (required):


Scorecard: Round 3 - Learning & Reengineering

Purpose: Evaluate how quickly they understand and extend unfamiliar systems. 60 minutes.

Interviewer: _____ Date: _______


Candidate Information

  • Name: _________
  • Position Applied For: _________
  • Codebase/System Used: _________

Learning Velocity (1-5)

Rating | Behavioral Anchor
1 | Struggled to make progress, needs extensive hand-holding, overwhelmed by unfamiliar code
3 | Made reasonable progress, needed some guidance, formed basic mental model
5 | Exceeded expectations, quickly grasped system, made meaningful changes independently

Rating: _____ / 5

Evidence:


Tool Usage (1-5)

Rating | Behavioral Anchor
1 | Random exploration, doesn't use available tools (search, AI, docs), lost in code
3 | Uses tools adequately, systematic but slow exploration
5 | Effective use of all available tools, strategic navigation, finds key areas quickly

Rating: _____ / 5

Evidence:


Mental Model Formation (1-5)

Rating | Behavioral Anchor
1 | Can't explain what code does, no coherent understanding, focused on syntax not structure
3 | Basic understanding, can explain main flow, some gaps in details
5 | Clear mental model, explains architecture and data flow confidently, identifies patterns

Rating: _____ / 5

Evidence:


Communication of Understanding (1-5)

Rating | Behavioral Anchor
1 | Can't articulate what they learned, vague descriptions, misses key points
3 | Explains basics clearly, some details unclear
5 | Crisp explanation of how system works, highlights key decisions, asks insightful questions

Rating: _____ / 5

Evidence:


Exercise Details

Phase 1 (15 min): Understand the system

  • How did they approach exploration?
  • What tools did they use?
  • Did they ask clarifying questions?

Notes:

Phase 2 (10 min): Explain it back

  • Quality of explanation: ☐ Weak ☐ Adequate ☐ Strong
  • Accuracy of mental model: ☐ Inaccurate ☐ Mostly accurate ☐ Precise

Notes:

Phase 3 (25 min): Implement a change

  • Change requested:
  • Time to complete: _____
  • Quality of implementation: ☐ Doesn't work ☐ Works with issues ☐ Works well
  • Did they understand implications? ☐ No ☐ Somewhat ☐ Yes

Notes:

Phase 4 (10 min): Discuss improvements

  • What would they do differently?
  • Quality of critique: ☐ Superficial ☐ Reasonable ☐ Insightful

Notes:


High Signal Topics Discussed

Evaluation methodology: "How would you know if this is working?"

  • Didn't discuss or weak response
  • Reasonable approach
  • Strong, thoughtful answer

Edge cases: "What could go wrong?"

  • Didn't consider or minimal thought
  • Identified some edge cases
  • Comprehensive edge case analysis

Production awareness: "How would this behave under load?"

  • No consideration
  • Some awareness
  • Strong production mindset

Overall Recommendation

  • Strong Hire - Exceptional learning velocity, would excel with unfamiliar systems
  • Hire - Good learning velocity, can handle unfamiliar systems
  • No Hire - Learning velocity too slow, struggles with unfamiliar systems
  • Strong No Hire - Cannot function without extensive guidance

Justification (required):


Scorecard: Round 4 - Decomposition

Purpose: Evaluate problem-solving approach on ambiguous, real-world problems. 60 minutes.

Interviewer: _____ Date: _______


Candidate Information

  • Name: _________
  • Position Applied For: _________
  • Problem Type Used: ☐ Technical ☐ Non-Technical ☐ Mixed

Problem Decomposition (1-5)

Rating | Behavioral Anchor
1 | Jumps to solutions, can't break down problem, focused on one aspect only
3 | Reasonable structure, identifies key sub-problems, some gaps in decomposition
5 | Excellent structure, identifies all key sub-problems, shows clear prioritization logic

Rating: _____ / 5

Evidence:


Clarifying Questions (1-5)

Rating | Behavioral Anchor
1 | No questions, makes assumptions, jumps to solution without understanding context
3 | Asks some questions, misses some key areas, reasonable understanding
5 | Excellent questions that uncover constraints, priorities, and context

Rating: _____ / 5

Evidence:


Constraint Awareness (1-5)

Rating | Behavioral Anchor
1 | Ignores constraints, proposes impractical solutions, no consideration of time/budget/politics
3 | Considers some constraints, reasonably practical
5 | Deeply considers all constraints (time, budget, client politics, technical debt)

Rating: _____ / 5

Evidence:


Trade-off Articulation (1-5)

Rating | Behavioral Anchor
1 | Single solution, can't articulate alternatives or trade-offs, "best practice" thinking
3 | Mentions trade-offs, reasonable analysis
5 | Clear articulation of multiple options with trade-offs, shows judgment on which to choose

Rating: _____ / 5

Evidence:


Problem Given

Problem Statement:


Candidate's Approach

Clarifying questions asked:

Sub-problems identified:

Prioritization logic:

Solution proposed:

Trade-offs articulated:


Red Flags (check if present)

  ☐ Jumped to solution without understanding problem
  ☐ Didn't ask any clarifying questions
  ☐ Ignored constraints (time, budget, resources)
  ☐ Only one solution, no alternatives
  ☐ Couldn't prioritize what to tackle first
  ☐ Proposed impractical or unrealistic approach

Overall Recommendation

  • Strong Hire - Exceptional problem decomposition, would handle ambiguous client situations
  • Hire - Good decomposition, can work through ambiguous problems
  • No Hire - Weak decomposition, struggles with ambiguity
  • Strong No Hire - Cannot structure problems, jumps to solutions

Justification (required):


Scorecard: Round 5 - Hiring Manager

Purpose: Thesis validation and final assessment. 45-60 minutes.

Interviewer: _____ Date: _______


Candidate Information

  • Name: _________
  • Position Applied For: _________

Pre-Interview Thesis

Write this BEFORE the interview, based on notes from the previous rounds.

I believe this candidate would be strong at:

Evidence from previous rounds:

I am uncertain about:

Questions to probe uncertainty:


Ownership Signals (1-5)

Rating | Behavioral Anchor
1 | Blames others for past failures, waits for direction, "that's not my job" attitude
3 | Neutral, reasonable accountability, some ownership examples
5 | Clear ownership examples, "I'll figure it out" attitude, takes responsibility for outcomes

Rating: _____ / 5

Evidence:


Pain Tolerance (1-5)

Rating | Behavioral Anchor
1 | Gave up easily in past situations, needs perfect conditions to work, escalates instead of problem-solving
3 | Persisted through challenges, reasonable resilience
5 | Thrived in ambiguity, pushed through broken systems, clear examples of grit

Rating: _____ / 5

Evidence:


Client/Stakeholder Management (1-5)

Rating | Behavioral Anchor
1 | No client-facing examples, uses jargon, doesn't listen, or avoids difficult conversations
3 | Some client-facing experience, adequate communication
5 | Strong client management examples, navigated difficult situations, built trust

Rating: _____ / 5

Evidence:


FDE Fit (1-5)

Rating | Behavioral Anchor
1 | Wants well-defined tasks, stable environments, or pure engineering work
3 | Understands FDE work, seems reasonably aligned
5 | Genuinely excited about FDE work, understands the hard parts and wants them

Rating: _____ / 5

Evidence:


Key Topics Covered

Most challenging client/stakeholder situation navigated:

Time they had to deliver despite inadequate support/resources:

What they're looking for in next role:

Questions they asked about the FDE model:


Thesis Validation

Was my pre-interview thesis confirmed? ☐ Yes ☐ Partially ☐ No

What did I learn that changed my view?


Culture & Team Fit

Would I want to work with this person? ☐ Yes ☐ Unsure ☐ No

Why or why not?


Overall Recommendation

  • Strong Hire - Exceptional candidate, would significantly raise the bar
  • Hire - Good candidate, meets or exceeds bar
  • No Hire - Below bar or concerns about fit
  • Strong No Hire - Clear mismatch or significant concerns

Justification (required):


FDE Mindset Indicators

Based on The FDE Manifesto

Principle | What to Look For | Signal
No Shortcuts That Compound | Do they take the harder right path over the easier wrong one? Do they resist quick fixes? | _____
Own the Product Stack | Do they want to understand how things work, not just use them? Do they contribute upstream? | _____
Information as Infrastructure | Do they value logging, metrics, documentation? Do they read before asking? | _____
Root Cause Everything | Do they accept "it's working now" or dig deeper? Do they document their debugging? | _____
Build Strategic Relationships | Do they see support tickets as relationship opportunities? Do they understand organizational dynamics? | _____

Use this throughout all interview rounds to assess alignment with FDE principles. Look for examples in their past work that demonstrate these mindsets.


Calibration Guidance

For New Interviewers

Before you start:

  1. Shadow 3-5 interviews with experienced interviewers
  2. Understand what "good" and "bad" look like for each dimension
  3. Practice filling out scorecards on recordings (if available)

Your first interviews:

  1. Co-interview with an experienced interviewer for 3-5 rounds
  2. Both fill out scorecards independently
  3. Compare scores and discuss differences
  4. Learn where you're too harsh or too lenient

Calibration Practices

Weekly Hiring Huddle:

  • Review recent candidates as a group
  • Discuss where scores diverged
  • Build shared understanding of "the bar"
  • Track: Are people we rated highly performing well?

Submit Scores Before Discussion:

  • Write your recommendation before the group debrief
  • Prevents anchoring on others' opinions
  • Ensures independent assessment

Watch for Bias:

  • Likability bias (charismatic but not capable)
  • Pedigree bias (big tech, top school = automatic good)
  • Recency bias (last interview matters more than it should)
  • Similarity bias (like me = good fit)

Red Flags Everyone Should Reject

Immediate rejection signals:

  • Can't explain their thinking clearly
  • Blames others for all past failures
  • Shows no curiosity about the problem domain
  • Needs everything specified before starting work
  • Dismissive of client concerns or constraints
  • Defensive when receiving feedback
  • Can't provide examples of working through ambiguity

Trust these signals. If you see them, it's a No Hire regardless of other strengths.

When Scores Diverge

If your score differs significantly from others:

  1. Ask yourself: What did I see that they didn't (or vice versa)?
  2. Share specific examples from your interview
  3. Listen to others' examples with open mind
  4. Update your score if evidence warrants

The bar is consistent across interviewers, not relative to what you personally saw.

Calibration Metrics to Track

  • Offer acceptance rate: >70% = good (lower = too slow or offers not competitive)
  • 90-day success rate: >80% = good (lower = bar too low or poor onboarding)
  • Interview-to-offer ratio by channel: Track if certain sources yield better hires
  • Score-to-performance correlation: Do 5s outperform 4s? If not, recalibrate.

Usage Notes

Making It Actionable

Use specific examples, not adjectives:

  • ✅ Good: "When asked about edge cases, they listed 5 specific failure modes and how to handle each"
  • ❌ Bad: "They seemed smart and had good communication skills"

Focus on evidence, not gut feeling:

  • What did they say or do that led to your score?
  • Could someone else read your notes and reach the same conclusion?

Be honest about uncertainty:

  • "I'm not sure if..." is valuable information
  • Better to flag uncertainty than pretend confidence

Common Mistakes

Mistake: Rating everyone 3s

  • Fix: Force yourself to differentiate. What would a 5 do differently than this candidate?

Mistake: Letting one strong area override weak areas

  • Fix: FDEs need multiple competencies. Strong technical skills + poor communication = No Hire

Mistake: Comparing candidates to each other

  • Fix: Compare each candidate to the role requirements, not to other candidates

Mistake: Not filling out scorecard immediately

  • Fix: Block 10 minutes right after interview. Details fade fast.

Mistake: Only positive or only negative notes

  • Fix: Every candidate has strengths and weaknesses. Document both.

Adapting by Level

For Associate FDE candidates:

  • Lower bar on experience, higher bar on learning velocity
  • Look for raw intelligence and attitude over polish
  • Side projects, hackathons = good signal for learning ability

For Senior+ candidates:

  • Higher bar on independence and client management
  • Look for examples of leading complex projects
  • Should demonstrate pattern recognition across problems

For specialized roles (business-facing vs product-facing):

  • Business-facing: Higher weight on communication and stakeholder management
  • Product-facing: Higher weight on technical depth and systems thinking