SQL Interview Questions for FAANG (2026): Real Questions From Meta, Google, Amazon, Apple, Netflix
SQL interview questions asked at FAANG (Meta, Google, Amazon, Apple, Netflix) in 2026 — with runnable answers, common traps, and the patterns interviewers really test.
Overview
FAANG SQL rounds rarely test trivia — they test whether you can decompose a vague business question into a correct query under time pressure. The same six patterns account for the bulk of reported phone screens: window functions, self-joins, NULL handling, GROUP BY traps, top-N per group, and percent-of-total. The questions below are reconstructed from candidate reports across Meta, Google, Amazon, Apple, and Netflix interviews in 2024-2026, with runnable answers.
Questions & Answers
Q1. Top 5 actively posting users by week Meta · Medium
Prompt: Given a posts(user_id, created_at) table, find the top 5 users by post count for each ISO week of the last 90 days.
SELECT week, user_id, post_count
FROM (
SELECT
DATE_TRUNC('week', created_at) AS week,
user_id,
COUNT(*) AS post_count,
ROW_NUMBER() OVER (
PARTITION BY DATE_TRUNC('week', created_at)
ORDER BY COUNT(*) DESC
) AS rn
FROM posts
WHERE created_at >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY 1, 2
) ranked
WHERE rn <= 5
ORDER BY week DESC, post_count DESC;
Why this matters: Tests three things at once: time bucketing, GROUP BY + COUNT, and ROW_NUMBER for top-N per group. Using LIMIT 5 here is the classic wrong answer (only gets the top 5 across all weeks combined).
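You can watch the LIMIT trap fail on a toy table. A minimal sketch in Python with SQLite (hypothetical data; week labels are precomputed since SQLite has no DATE_TRUNC; window functions need SQLite >= 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (user_id TEXT, week TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", [
    ("alice", "W1"), ("alice", "W1"), ("bob", "W1"),   # alice leads week 1
    ("bob", "W2"), ("bob", "W2"), ("bob", "W2"),       # bob leads week 2
    ("carol", "W2"),
])

# Wrong: LIMIT collapses all weeks into one global top-N.
wrong = conn.execute("""
    SELECT week, user_id, COUNT(*) AS cnt
    FROM posts GROUP BY week, user_id
    ORDER BY cnt DESC LIMIT 1
""").fetchall()

# Right: ROW_NUMBER partitioned by week yields one leader per week.
right = conn.execute("""
    SELECT week, user_id FROM (
        SELECT week, user_id,
               ROW_NUMBER() OVER (PARTITION BY week ORDER BY cnt DESC) AS rn
        FROM (SELECT week, user_id, COUNT(*) AS cnt
              FROM posts GROUP BY week, user_id)
    ) WHERE rn = 1 ORDER BY week
""").fetchall()

print(wrong)  # [('W2', 'bob', 3)] -- week 1's leader is gone entirely
print(right)  # [('W1', 'alice'), ('W2', 'bob')]
```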
Q2. User retention by signup cohort Google · Hard
Prompt: For each weekly signup cohort, compute Day-7 retention (% of users from the cohort who returned 7 days after signup) given users(id, signup_date) and sessions(user_id, session_date).
WITH cohorts AS (
SELECT
id AS user_id,
DATE_TRUNC('week', signup_date) AS cohort_week
FROM users
),
retained AS (
SELECT DISTINCT
u.id AS user_id
FROM users u
JOIN sessions s
ON s.user_id = u.id
AND s.session_date = u.signup_date + INTERVAL '7 days'
)
SELECT
c.cohort_week,
COUNT(DISTINCT c.user_id) AS cohort_size,
COUNT(DISTINCT r.user_id) AS retained_d7,
ROUND(100.0 * COUNT(DISTINCT r.user_id) / COUNT(DISTINCT c.user_id), 2) AS d7_retention_pct
FROM cohorts c
LEFT JOIN retained r ON r.user_id = c.user_id
GROUP BY 1
ORDER BY 1;
Why this matters: Interviewer is checking whether you handle cohort math without inflating denominators. The LEFT JOIN + DISTINCT pattern is the safe construction; an INNER JOIN here silently drops cohorts with zero retained users and biases the average upward.
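The denominator bias is easy to reproduce on toy data. A minimal SQLite sketch (hypothetical cohorts; the retained set is pre-materialized for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cohorts (user_id TEXT, cohort_week TEXT);
    CREATE TABLE retained (user_id TEXT);
    INSERT INTO cohorts VALUES ('a','W1'),('b','W1'),('c','W2'),('d','W2');
    INSERT INTO retained VALUES ('a');  -- only week-1 user 'a' came back
""")

# INNER JOIN: cohorts with zero retained users vanish from the output.
inner = conn.execute("""
    SELECT c.cohort_week, COUNT(DISTINCT c.user_id), COUNT(DISTINCT r.user_id)
    FROM cohorts c JOIN retained r ON r.user_id = c.user_id
    GROUP BY 1 ORDER BY 1
""").fetchall()

# LEFT JOIN: every cohort survives; unmatched rows count as 0 retained.
left = conn.execute("""
    SELECT c.cohort_week, COUNT(DISTINCT c.user_id), COUNT(DISTINCT r.user_id)
    FROM cohorts c LEFT JOIN retained r ON r.user_id = c.user_id
    GROUP BY 1 ORDER BY 1
""").fetchall()

print(inner)  # [('W1', 1, 1)] -- W2 gone AND W1's denominator shrank to 1
print(left)   # [('W1', 2, 1), ('W2', 2, 0)]
```

Note the second failure mode the inner join causes: the W1 denominator drops from 2 to 1, turning 50% retention into a fake 100%.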
Q3. Second-highest order amount per customer Amazon · Medium
Prompt: Given orders(id, customer_id, total), return each customer with their second-highest order. Customers with fewer than 2 orders should be excluded.
SELECT DISTINCT customer_id, total AS second_highest_total
FROM (
SELECT
customer_id,
total,
DENSE_RANK() OVER (
PARTITION BY customer_id
ORDER BY total DESC
) AS rk
FROM orders
) ranked
WHERE rk = 2;
Why this matters: DENSE_RANK vs ROW_NUMBER vs RANK is the trap. If two orders tie for the highest, ROW_NUMBER arbitrarily labels one of the tied rows rank 2, so you would return a duplicate of the highest value; RANK skips rank 2 entirely and returns nothing; DENSE_RANK correctly returns the next distinct value. State this distinction out loud in the interview.
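The tie behavior is worth seeing side by side. A sketch on a toy orders table via SQLite (hypothetical data; needs SQLite >= 3.25 for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, total INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("c1", 100), ("c1", 100), ("c1", 80)])  # tie at the top

rows = conn.execute("""
    SELECT total,
           ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS row_num,
           RANK()       OVER (PARTITION BY customer_id ORDER BY total DESC) AS rnk,
           DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY total DESC) AS dense_rnk
    FROM orders
    ORDER BY row_num
""").fetchall()
for r in rows:
    print(r)
# (100, 1, 1, 1)
# (100, 2, 1, 1)  <- ROW_NUMBER calls the tied 100 "second"
# (80, 3, 3, 2)   <- only DENSE_RANK puts 2 on the next distinct total
```

Filtering on rk = 2 therefore returns 100 (wrong) under ROW_NUMBER, nothing under RANK, and 80 under DENSE_RANK.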
Q4. Department salary skew Apple · Medium
Prompt: Given employees(id, department_id, salary), find departments where the highest-paid employee earns more than 3x the department median.
SELECT department_id, max_salary, median_salary,
ROUND((max_salary / NULLIF(median_salary, 0))::numeric, 2) AS skew_ratio
FROM (
SELECT
department_id,
MAX(salary) AS max_salary,
PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY salary) AS median_salary
FROM employees
GROUP BY department_id
) d
WHERE max_salary > 3 * median_salary
ORDER BY skew_ratio DESC;
Why this matters: PERCENTILE_CONT is dialect-specific (PostgreSQL/Snowflake/BigQuery support it; MySQL <8.0 needs a workaround). NULLIF guards against zero-division when a department median is 0.
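If your engine lacks PERCENTILE_CONT, you can sanity-check the skew logic outside the database. A sketch using Python's statistics.median, which interpolates the even-count case the same way PERCENTILE_CONT(0.5) does (hypothetical salary data):

```python
from statistics import median

# Hypothetical departments: one skewed by a single outlier, one not.
departments = {
    "eng":   [90, 100, 110, 500],   # max 500, median 105
    "sales": [50, 60, 70],          # max 70,  median 60
}

skewed = {}
for dept, salaries in departments.items():
    med = median(salaries)          # interpolated, like PERCENTILE_CONT(0.5)
    mx = max(salaries)
    if med and mx > 3 * med:        # mirrors the WHERE clause above
        skewed[dept] = round(mx / med, 2)

print(skewed)  # {'eng': 4.76}
```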
Q5. Consecutive watch streak Netflix · Hard
Prompt: Given watch_events(user_id, watched_date), find each user's longest streak of consecutive days watching. Skipping a day breaks the streak.
WITH dated AS (
SELECT DISTINCT user_id, watched_date FROM watch_events
),
streaks AS (
SELECT
user_id,
watched_date,
watched_date - (ROW_NUMBER() OVER (
PARTITION BY user_id ORDER BY watched_date
))::INTEGER * INTERVAL '1 day' AS streak_group
FROM dated
)
SELECT
user_id,
MAX(streak_length) AS longest_streak
FROM (
SELECT user_id, streak_group, COUNT(*) AS streak_length
FROM streaks
GROUP BY user_id, streak_group
) g
GROUP BY user_id
ORDER BY longest_streak DESC;
Why this matters: The "row_number minus date" trick is a classic interviewer favorite. Consecutive dates produce a constant streak_group; gaps shift the group. If you have not seen this pattern before, walk through the math out loud — interviewers reward visible reasoning over silent struggle.
Q6. Symmetric friend pairs Meta · Medium
Prompt: Given friends(user_a, user_b) where each row is a one-way request, return the unique pairs of users where both directions exist (mutual friendship).
SELECT DISTINCT
LEAST(f1.user_a, f1.user_b) AS user_x,
GREATEST(f1.user_a, f1.user_b) AS user_y
FROM friends f1
JOIN friends f2
ON f1.user_a = f2.user_b
AND f1.user_b = f2.user_a
WHERE f1.user_a < f1.user_b;
Why this matters: Self-join on the same table with reversed columns is the canonical pattern. The `f1.user_a < f1.user_b` filter prevents listing each pair twice; LEAST/GREATEST normalizes the output.
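The same self-join runs on SQLite if you swap LEAST/GREATEST for its multi-argument scalar min()/max(). A sketch on hypothetical data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE friends (user_a INTEGER, user_b INTEGER)")
conn.executemany("INSERT INTO friends VALUES (?, ?)",
                 [(1, 2), (2, 1),    # mutual: both directions exist
                  (3, 4)])           # one-way only: should be excluded

pairs = conn.execute("""
    SELECT DISTINCT MIN(f1.user_a, f1.user_b) AS user_x,
                    MAX(f1.user_a, f1.user_b) AS user_y
    FROM friends f1
    JOIN friends f2
      ON f1.user_a = f2.user_b AND f1.user_b = f2.user_a
    WHERE f1.user_a < f1.user_b
""").fetchall()
print(pairs)  # [(1, 2)] -- the one-way (3, 4) request does not appear
```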
Common Pitfalls
- Using LIMIT instead of ROW_NUMBER for top-N per group — gives wrong results when groups have different sizes.
- Forgetting NULL handling in COUNT — COUNT(column) skips NULLs, COUNT(*) does not. Senior interviewers test this.
- Mixing aggregate and non-aggregate columns without GROUP BY — dialect-dependent error, sometimes silently wrong on MySQL with ONLY_FULL_GROUP_BY off.
- Confusing WHERE and HAVING — WHERE filters before aggregation, HAVING filters after. WHERE COUNT(*) > 5 is always invalid.
- Cross-joining accidentally — when you JOIN on a non-unique key, row counts multiply silently.
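The COUNT and HAVING pitfalls take seconds to verify. A minimal SQLite sketch on a hypothetical table with one NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 1), ("a", None), ("a", 3)])

# COUNT(*) counts rows; COUNT(val) silently skips the NULL.
counts = conn.execute("SELECT COUNT(*), COUNT(val) FROM t").fetchone()
print(counts)  # (3, 2)

# Aggregate filters belong in HAVING, which runs after GROUP BY.
having = conn.execute(
    "SELECT grp FROM t GROUP BY grp HAVING COUNT(*) > 2"
).fetchall()
print(having)  # [('a',)]
```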
Practice With AI2SQL
The fastest way to lock in these patterns is to write them by hand, then check against AI2SQL output. Generate variations of every question above with different schemas, rewrite them from scratch, and you will have these patterns memorized within a week.
Frequently Asked Questions
How hard is the SQL bar at FAANG companies?
For data analyst, DS, and product analyst roles at Meta/Google/Amazon, expect medium-to-hard SQL on a phone screen: window functions, self-joins, top-N per group, and cohort/retention math. The bar is lower for SWE roles where SQL is a screening filter rather than a primary skill.
Which dialect should I use in a FAANG SQL interview?
Default to ANSI SQL or PostgreSQL syntax unless the interviewer specifies otherwise. Ask before using engine-specific functions (e.g., MySQL JSON_EXTRACT, PostgreSQL DISTINCT ON, BigQuery QUALIFY). Stating the assumption explicitly buys credibility.
Should I optimize my SQL during the interview?
Get a correct answer first, then talk about indexes, partition pruning, or window-function alternatives. Optimizing prematurely without a correct baseline is a common red flag interviewers report.
How can I practice these interview patterns?
Use AI2SQL to generate variations of these patterns against a sample schema, then rewrite the SQL by hand without looking. The describe-then-rewrite loop is faster than passive reading.