What actually changed in June 2025
For years Canva ran a "Computer Science Fundamentals" screen. Standard FAANG-shape: a problem on a shared editor, no tools, solve it in front of someone. Then in June 2025 Simon Newton, Canva's Head of Platforms, posted "Yes, You Can Use AI in Our Interviews" on the Canva engineering blog and replaced the screen with an AI-assisted coding round for backend, frontend, and ML engineering roles.
The reasoning was unusually honest for a company blog post: "almost half of [Canva's] frontend and backend engineers are daily active users of an AI-assisted coding tool," and the old screen was asking candidates to solve problems "without the very tools they'd use on the job." That's the right reason. Most companies that allow AI in interviews do it because banning it is unenforceable; Canva did it because the old format had stopped measuring the job.
The example pivot Canva published is the giveaway. Old problem: "implement Conway's Game of Life." New problem: "build a control system for managing aircraft takeoffs and landings at a busy airport." That's not a softer problem. It's a harder one — you have to scope it, decompose it, decide what to build first, and read the code that comes back. The algorithm bar didn't move; the engineering bar moved up.
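To make the shape of that concrete, here's a first-stage sketch of the airport problem in TypeScript, Canva's frontend language. Every name and rule below is hypothetical — Canva hasn't published a reference solution — but it shows what "decide what to build first" looks like: model the domain, then ship the simplest scheduling rule.

```typescript
// Hypothetical first-pass decomposition of the airport problem.
// Names and rules are illustrative, not Canva's reference solution.
type Op = "takeoff" | "landing";

interface Request {
  flightId: string;
  op: Op;
  requestedAt: number; // minutes since start of day
}

interface Slot {
  flightId: string;
  op: Op;
  start: number; // assigned minute
}

// Stage 1: one runway, fixed per-operation occupancy, first-come first-served.
// Later stages would add priorities (landings before takeoffs), multiple
// runways, and cancellations — the parts worth designing by hand.
function schedule(requests: Request[], occupancy = 2): Slot[] {
  const byTime = [...requests].sort((a, b) => a.requestedAt - b.requestedAt);
  const slots: Slot[] = [];
  let runwayFreeAt = 0;
  for (const r of byTime) {
    // An operation can't start before it was requested or while the runway is held.
    const start = Math.max(r.requestedAt, runwayFreeAt);
    slots.push({ flightId: r.flightId, op: r.op, start });
    runwayFreeAt = start + occupancy;
  }
  return slots;
}
```

The point isn't this code — it's that stage one is deliberately simple, and the interview signal is in naming, out loud, what stages two through four would add.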
Three pillars. Canva publishes them. Most candidates still don't read them.
Canva's interviewer-facing post, "AI Interview Success: An Interviewer's Inside Guide", names the three things you're being graded on. They are:
- Engineering Problem-Solving. Decomposition, scoping, edge cases, talking through what you'd build first and why. This is the same fundamentals signal as the old screen, just measured against a richer problem.
- Technical Depth & Ownership. Can you read the code the AI generated, justify the architecture, hold it to production standards, push back on output that's wrong or sloppy? This is the new pillar. It rewards seniority. It punishes copy-pasting.
- AI Collaboration Effectiveness. Strategic prompting plus critical review. Note the "plus." Strategic prompting alone isn't the signal — they want to see you reject AI output as often as you accept it.
Canva's own framing, verbatim: "we're assessing your engineering capabilities enhanced by AI collaboration, not your AI skills in isolation." That sentence is the entire rubric. If your prep plan is "learn to prompt better," you're prepping for the wrong interview.
Three failure modes — also published, also under-read
The same post names the three ways candidates fail. Canva's terminology, my commentary.
1. AI Showcase
The candidate prompts, accepts, prompts, accepts, without ever stopping to read what came back. Looks productive on the surface. The interviewer pauses, asks "what does this code do?" and the candidate freezes. Round over.
The fix is counterintuitive: use AI less, not more. Use it for the boring scaffolding — the 30-line setup you've written a hundred times. Then turn it off and design the actual hard piece in your own head, out loud, before asking the tool to draft anything. The interviewer is grading the decisions, not the keystrokes.
2. Feature Marathon
The candidate races to ship as many features as possible. Plumbing everywhere, no error handling, no tests, no thinking about edges. They leave the room with the most code on screen and the lowest score.
Speed isn't the signal. Canva's interviewers explicitly value fewer features done well — including the unhappy paths, the ambiguous inputs, the resource limits. If the prompt is "build an airport scheduler" and you've shipped takeoffs but not handled the case where two flights land at the same instant, you have zero features done, not one.
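Handling that edge can be as small as an interval check — the following is a hypothetical helper, not anything from Canva's rubric, but it's the kind of unhappy-path code that counts as a finished feature:

```typescript
// Hypothetical edge-case check: do two runway operations overlap in time?
// Assumes an operation holds the runway for the half-open interval
// [start, start + duration).
interface RunwayOp {
  flightId: string;
  start: number;    // minutes
  duration: number; // minutes the runway is held
}

function conflicts(a: RunwayOp, b: RunwayOp): boolean {
  // Half-open intervals overlap iff each starts before the other ends.
  return a.start < b.start + b.duration && b.start < a.start + a.duration;
}
```

Noticing that the AI's generated scheduler never calls anything like this — and saying so — is exactly the review behavior the round rewards.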
3. Hands-Off
The candidate over-delegates. AI writes a function, candidate moves on. AI suggests an architecture, candidate accepts. No critical review, no rewrites, no "actually, no." This is the lowest-status failure mode because it shows the candidate has already lost the collaboration.
You're auditioning for the role of senior engineer who keeps an AI in line, not the AI's assistant. Reject something out loud at least once. Even something small. "This handles the happy path but ignores the case where the runway is occupied — let me rewrite it."
The "what does this code do?" pause
Across multiple candidate reports the same moment shows up. The interviewer waits for the AI to finish generating, lets the candidate accept the change, and then asks: what does this code do?
That question is not a test of memory. It's a test of whether you read the output. If you can answer it cold — line by line, with the trade-offs and the missing cases — you've passed the highest-weighted signal in the round. If you can't, no amount of further prompting saves you. The interviewer has their answer.
Practical implication: budget review time after every generation. Read the code. Out loud is fine. The "wasted" 60 seconds reading is the most leveraged minute of the interview.
The other rounds
The AI-mandatory product round is the headline, but it's one of four onsite rounds. The rest:
- Second coding round. Closer to classic algorithmic shape. JavaScript snippets, time complexity, event-loop questions, and non-blocking concurrent requests have all shown up in 2025 candidate reports. Same AI policy. Don't put the tool down — but don't lean on it for problems where you should know the answer.
- System design. A reported example: "design a scalable system to handle image uploads with processing and storage optimized for performance." Suspiciously on-brand for the company that does this for a living. Treat it as a hint about what kinds of systems they care about — file pipelines, async processing, CDN strategies, transcoding budgets.
- Culture interview. Mapped to Canva's six values. "Be a good human" and "Aim for pragmatic excellence" are the two cited most often. Treat this like Amazon's Leadership Principles: bring two stories per value, in STAR format, that you can deliver in under 90 seconds. The candidates who dismiss this round get rejected by it.
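For the second round's concurrency topics, the canonical non-blocking pattern — a generic sketch, not a reported Canva question — is starting every request before awaiting any of them:

```typescript
// Generic non-blocking fan-out: kick off all requests, then await together.
// fetchJson is a caller-supplied stand-in; any promise-returning call works.
async function fetchAll<T>(
  urls: string[],
  fetchJson: (url: string) => Promise<T>,
): Promise<T[]> {
  // Mapping first starts every request before any await; results come back
  // in input order regardless of which request finishes first.
  const pending = urls.map((u) => fetchJson(u));
  return Promise.all(pending);
}
```

Mapping to promises first is what makes it concurrent; `for (const u of urls) await fetchJson(u)` would serialize the latency — exactly the distinction event-loop questions probe.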
The tech stack tax
Canva is a TypeScript + React shop on the frontend (with MobX, following a 2017 migration off vanilla JS), primarily Java on the backend, Python for ML, and a WebGL rendering engine in the editor itself. If you're given a language choice in the AI round, pick the one that matches what you'll actually do at the company. There's no "TypeScript penalty," but there is a credibility tax to walking into a Canva interview and writing the round in C++. The interviewer is now wondering whether you'd be productive on day one.
Speculative but worth flagging: third-party prep guides put the AI round at roughly 60–90 minutes with a multi-file codebase task in the few-hundred-LOC range. Canva hasn't published these numbers. Treat them as ballpark, not contract.
How I'd actually prep
Most "AI interview prep" advice is bad. It tells you to practice prompting. Canva is grading the part where you stop prompting. So:
- Pick one tool and live in it for two weeks. Cursor, Copilot, or Claude — Canva names all three. Pick whichever your day job uses. Switching tools in the room is a disaster.
- Practice the read-aloud habit. Every time the AI generates something, before you accept, say what it does in plain language. Out loud. To yourself. This is the muscle that "what does this code do?" is testing.
- Practice rejecting output. Force yourself to discard at least one suggestion per session, even when it's correct. Say why. Rewrite it. The instinct to keep "good-enough" code is the instinct that loses the Hands-Off comparison.
- Run multi-stage practice problems. Canva's questions are designed so a single prompt won't solve them. Practice on problems that escalate — first stage trivial, fourth stage uncomfortably hard. The decomposition signal is what they're after.
- Prep the culture round like a senior. Six values, two stories each, 12 stories total. Most candidates show up with three.
The honest part
Canva's policy is the most coherent of any company I've looked at. They didn't allow AI because banning it was unenforceable. They didn't allow it for marketing. They built a rubric, ran shadow interviews, recorded mocks, and published both the pillars and the failure modes before the policy went live. There's an ATC Sydney talk by Joel Knudsen (Canva's Global TA Lead, Engineering) describing exactly that process. Most companies don't treat interview redesign as an engineering project. Canva did.
The uncomfortable consequence: the bar is higher than it was. Old format rewarded knowing data structures. New format rewards decomposition, knowing what to ask for, reading code critically, and knowing when to throw the generated output away. Most of those are senior skills. Junior candidates who relied on memorized algorithms now have to demonstrate judgment.
That's not a bug. That's the design.