Company guide · Rippling

Rippling lets you use AI in every round.
The rubric quietly changes when you do.

Rippling has the most candidate-friendly AI policy of any company we've looked at. It's also the one most likely to get you rejected if you take it at face value. Below: what's actually in the 90 minutes that matter, what the rubric looks like in practice, and the candidate post on Blind that explains the policy better than the policy does.

What Rippling tells you, and what they actually mean

Rippling tells candidates the same thing on every coding round. Use whatever tools you want. Bring your IDE, bring Copilot, bring ChatGPT, bring Cursor, bring nothing if that's how you work fastest. The interview question doesn't change either way.

Read that policy and you might think Rippling has the most candidate-friendly setup in the industry. They sort of do. They also evaluate you on a different rubric depending on what you reached for, and they don't write that part on the calendar invite. The bar shifts under your feet based on whether you used the autocomplete.

That is not editorializing. Here's the line directly from interviewing.io's writeup, which is the closest thing the public has to an authoritative breakdown of how Rippling runs technical screens:

"AI use in Rippling interviews is optional, and you can use any tool you'd like, but they will use a different rubric to evaluate your performance depending on your choice." (interviewing.io, Rippling Interview Process & Questions)

That's the most honest interview policy of any company we've looked at. It's also the one that confuses candidates the most. "You can use whatever you want" is not the same sentence as "we have no preference." Rippling has very specific preferences. They just decline to write them down.

The 90-minute round nobody warns you about

The phone screen is what you'd expect. CodePair, one hour, LeetCode-flavored problems, optional AI. Pass that and you get a hiring manager screen, then an onsite of three to four hours. The onsite is where things get interesting.

The onsite has three coding-adjacent components. A one-hour LeetCode-style round with interconnected problems. A one-hour system design. And the one we want to talk about: a 90-minute build-and-discuss session that's, in our view, the cleanest signal Rippling sends about what they actually care about.

The structure of that 90 minutes:

  • For roughly the first hour, you build something. The interviewer is watching how you work, what you reach for, what you check before moving on. AI is fine. So is talking out loud, looking up docs, scrapping an approach halfway through.
  • Then for the last 30 or so minutes they make you defend it. What did you build. What would you do differently. What scales. What breaks. Why you chose this data structure over that one. Why you split the function here. Why you trusted the AI's suggestion at line 42.

That defense half is the part that selects for engineers and filters out coders. It's also the part AI can't help you with, because the AI didn't make the decisions and can't tell you why each one was right.

The Blind post that explains the policy better than the policy does

There's a thread on Blind from a candidate who used Copilot during their phone screen, finished the problems, and was rejected. The interviewer's feedback, paraphrased: you shouldn't have used that much AI. The candidate posted asking, fairly, why AI is "allowed" if it's then judged.

That post should be required reading for anyone interviewing at Rippling. The candidate isn't wrong. The rejection isn't wrong either. Both things are true at once.

Here's what's actually going on. Rippling does allow AI. They mean it. They also expect you to know when to use it, and reaching for autocomplete on every line of a 30-line phone-screen problem is a signal to the interviewer that you can't think through a small problem on your own. The tool wasn't the issue. The judgment was.

Hello Interview cites a Rippling candidate who got similar feedback through a different failure path:

"Relied too heavily on AI even though their initial approach was correct." (Reported Rippling interviewer feedback, via Hello Interview)

Read that twice. The candidate's approach was right. They still failed, because they handed implementation to the assistant when they could have driven it themselves, and then couldn't, after the fact, defend the choices the AI made on their behalf. That's the canonical Rippling reject shape, and it shows up in candidate writeups across Blind, Glassdoor, and Taro.

What Rippling is actually grading

Strip away the policy ambiguity and the rubric is straightforward. Rippling wants to know four things, in roughly this order of importance.

Whether you can solve the problem on your own. Not from scratch in pencil-and-paper terms, but at the level of "do you know what you're trying to build, and can you sketch the algorithm before you type." If your answer to a Rippling phone-screen problem starts with prompting an AI for the approach, that's a fail signal regardless of whether the answer compiles.

Whether you can read code, including AI-generated code. Did you actually look at what the assistant produced, or did you paste and pray. The interviewer is watching for this in real time. A candidate who scrolls past a 20-line generated block without reading it and moves on to the next request is leaving evidence on the screen.

Verification. Did you run the code. Did you write a test. When the input changed, did you check the existing solution still worked, or did you just regenerate. Rippling shares this preference with Meta and, frankly, every reasonable company shipping AI-enabled interviews. Same rubric line: test before trusting.

The defense round. Can you explain every choice in the code, including the ones the AI made for you. If "I'm not sure why it's structured that way, the AI suggested it" comes out of your mouth, you've lost the round. No recovering from that sentence.

How to prepare without falling into the trap

Sketch the approach in plain text before you touch the keyboard or the AI. Two minutes of "what data structure, what loops, what edge cases" written down on a scratch buffer somewhere. Then implement. The implementation is fine to do with AI assistance. The decision is yours.

When the AI generates code, read every line and ask yourself: do I know why this is here, and would I be able to defend it in 30 minutes when the interviewer asks. If the answer is no on either count, scrap it and write something simpler that you understand.

Build the verification reflex until it survives time pressure. After every block of generated or hand-written code, your default move is to run something that distinguishes "this works" from "this looks like it works." Don't outsource testing to the AI. Rippling is watching exactly this, and the candidates who don't have the habit are visible from minute three.
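A minimal version of that reflex, in Python: after any block lands, generated or hand-written, run a handful of assertions that would actually fail if the code were wrong, not just one happy-path call. The function below is a stand-in for a freshly generated block, not a real Rippling problem:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals (stand-in for generated code)."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend the last interval
        else:
            merged.append([start, end])
    return merged

# The reflex: checks that distinguish "works" from "looks like it works".
assert merge_intervals([]) == []                              # empty input
assert merge_intervals([[1, 3], [2, 6]]) == [[1, 6]]          # simple overlap
assert merge_intervals([[1, 4], [4, 5]]) == [[1, 5]]          # touching endpoints
assert merge_intervals([[5, 6], [1, 2]]) == [[1, 2], [5, 6]]  # unsorted input
```

Ten seconds of assertions like these is exactly the habit the interviewer is watching for, and it's the same move that saves you when the input changes mid-round.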

The piece of advice we don't see often enough: practice on multi-file problems, not single functions. Rippling's 90-minute round is "build a thing in a small codebase," not "implement is_palindrome." LeetCode prep is necessary for the other onsite round. It's almost useless for the build-and-discuss one. They're different muscles, and training one doesn't cross-train the other.

This is, not coincidentally, what Nyrion is built for. Each problem is a multi-stage repo, the agent's posture is execution-only with zero novel ideas, and tests gate progression so you can't fake your way past a stage. Filter the catalog by Rippling to see the problems we tag for this format.

What we think Rippling does next

Most companies experimenting with AI-enabled interviews are still figuring out their policy in public. Rippling has converged faster than most, and we expect them to keep going.

Our prediction: within twelve months Rippling tightens the rubric explicitly, and the "use any tool you want" framing gets a companion sentence telling candidates exactly what they're being graded on. Not because the current policy is wrong, but because the gap between "what we said" and "what we measure" is generating the confused candidate feedback you can read on Blind today. They'll close it.

The other prediction is that the 90-minute build-and-discuss round expands to two of the four onsite slots. The signal it produces is too clean to throw away on a single round, and the system-design slot has overlap with it that nobody really needs.

Frequently asked, briefly answered

Is AI really allowed in every round?

Yes, including the phone screen. The policy is consistent across stages. What changes is the rubric the interviewer applies, not the permission.

Should I use AI?

Use it where it makes you faster on things you already understand. Don't use it to make a decision you should be making yourself. If a problem is small enough that you can solve it without the assistant in the time given, do that.

Will I get a no-AI round at any point?

Not as a separate gate. Rippling's policy is consistent. Some interviewers may ask "let's see how you'd do this without it" mid-round. That's a signal they want to see your independent judgment. Read it as a request, not a trap.

Which AI model should I bring?

The one you've practiced with. Rippling doesn't specify. Cursor with Claude is what most engineers we've talked to default to, but bring whatever your daily-driver setup is and don't change it on interview day.

What's the failure mode I should watch out for?

Over-reliance. Multiple candidate reports describe being rejected for using AI even when their initial approach was correct. The fix isn't "use AI less." It's "use AI in places where it doesn't replace your own judgment." Sketch first, generate second, read everything, defend everything.

How does Rippling compare to Meta?

Meta's format is more structured. One 60-minute three-phase project, AI built into the editor, specific approved models. Rippling is looser. You bring your tools, the round is longer (90 minutes for the build-and-discuss), the second half is a defense conversation. Rippling tests judgment more directly. Meta tests output quality and verification more directly. Both are valid. Both are harder than the LeetCode rounds they partially replace.

Practice problems tagged for Rippling

We curate problems specifically for the build-and-discuss format. Multi-stage repos, real codebases, an agent in the editor that won't make decisions for you, and tests that gate progression. No login required to look at the catalog.

Sources