It spans 9000 LOC across 63 new files, including a DSL parser and much more.
How would you go about reviewing a PR like this?
AI doesn't directly make this stuff worse; it accelerates a team's journey towards embracing engineering practices around the code being written, whether by humans or LLMs.
For work? Close it and remind them that their AI velocity doesn't save the company time if it takes me many hours (or even days depending on the complexity of the 9k lines) to review something intended to be merged into an important service. Ask them to resubmit a smaller one and justify the complexity of things like a DSL if they wanted it included. If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.
Another equally correct approach (given the circumstances of the organisation) is to get a different AISlopBot to do the review for you, so that you spend as much time reviewing as the person who submitted the PR did coding.
As long as the response delay increases at least geometrically, there is a finite bound to the amount of work required to deal with a pull request that you will never merge.
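A toy illustration of that bound, as a hedged sketch (the one-day starting delay and the function are made up for illustration): if each reply to an unwanted PR is delayed twice as long as the last, only logarithmically many exchanges fit in any fixed window.

    # Hypothetical sketch: count how many review exchanges fit in a window
    # when each successive reply delay doubles.
    def replies_within(days_available: float, first_delay_days: float = 1.0) -> int:
        delay, elapsed, replies = first_delay_days, 0.0, 0
        while elapsed + delay <= days_available:
            elapsed += delay
            delay *= 2
            replies += 1
        return replies

    print(replies_within(365))  # -> 8 exchanges in a whole year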
It's absolutely soul crushing when you're motivated to do a good job, but have a few colleagues around you that have differing priorities, and aren't empowered to do the right thing, even when management agrees with you.
The context here is lots of vibe coded garbage thrown at the reviewer.
The conditional was: If my boss forces me to review it
> As maintainer of some open source projects, there are no circumstances in which...
...you would force yourself to do anything that you don't want to do. Your approach is absolutely correct for the organisational circumstances in which this might happen to you.
There are other organisational circumstances where being the squeaky wheel, even when it's the right thing to do for the business, will be the wrong thing for you personally. It's valuable to identify when you're standing in front of a steamroller, and get out of the way.
In this job market? And where pretty much every company seems to be following the top-down push for AI-driven "velocity"?
For roles that are more fungible (train drivers, factory workers), I can see the case from the worker's perspective, even if I think there are externalities.
But I can't even see it from a worker's perspective in roles such as software or sales, why would anyone good want to work in an environment where much worse workers are protected, compensation is more levelised etc?
I'm assuming this will boil down to some unspoken values differences but still thought I'd ask.
I don't know why one would want to maintain a system of 'look how high I can still jump after all these years, reward please'. Again, expectations: they rise faster than the rewards.
The adversarial framing with coworkers is confusing, discipline is a different matter from collective bargaining.
Of course keeping the union narrowly focused is an issue. Unions are a democracy after all.
The "much worse workers" are the majority. That's why you see everyone complaining about technical interviews and such - those of us who crush the interviews and get the jobs don't mind.
If AI slop infiltrates projects enterprises are built upon, it's likely companies and their customers are metaphorically hurt too, because of a spike in outages etc. (which already happens, given AWS got like 7000 outage reports after getting rid of another 14000 employees).
Yes, AI can be cool, but can we stop being this blind regarding its limitations, use cases, how it's actually used, how it actually benefits humanity, and so on? Like give me a valid reason for Sora existing (except for monetizing the attention spans of humans, which I consider highly unethical).
I can’t think of why the idea of unions is gaining popularity in some programmer circles, other than that its advocates simply don’t have economic common sense.
How?
Otherwise you need to be the person at the company who cuts through the bullshit and saves it when the VibeCodeTechDebt bubble pops across the industry.
Depends on the context. Is this from:
1. A colleague in your workplace. You go "Hey ____, that's kind of a big PR; I'm not sure I can review this in a reasonable time frame. Can you split it up into more manageable pieces? PS: Do we really need a DSL for this?"
2. A new contributor to your open source project. You go "Hey ____, Thanks for your interest in helping us develop X. Unfortunately we don't have the resources to go over such a large PR. If you are still interested in helping please consider taking a swing at one of our existing issues that can be found here."
3. A contributor you already know. You go "Hey, I can't review this ___, it's just too long. Can we break it up into smaller parts?"
Regardless of the situation be honest, and point out you just can't review that long a PR.
The problem being that once someone has put together a PR, it’s often too late to go back to the serious thinking step and you end up having to massage the solution into something workable.
Refusing such a PR (which, again, is most of them) is easy. But it is also time consuming if you don’t want to be rude. Everything you point out as inadequate is a chance for them to rebut or “fix” in a way which is again unsatisfactory, which only leads to more frustration and wasted time. The solution is to be specific about the project’s goals but vague about the code. Explain why you feel the change doesn’t align with what you want for the project, but don’t critique specific lines.
There are, of course, exceptions. Even when I refuse a PR, if it’s clear it was from a novice with good intentions and making an effort to learn, I’ll still explain the issues at length so they can improve. If it’s someone who obviously used an LLM, didn’t understand anything about what they did and called it a day, I’ll still be polite in my rejection but I’ll also block them.
Ginger Bill (creator of Odin) talked about PRs on a podcast a while back and I found myself agreeing in full.
https://www.youtube.com/watch?v=0mbrLxAT_QI&t=3359s
In life in general, having the wherewithal to say no is a superpower. While I appreciate the concern about alienating newcomers, you don't start contributing to an existing project by adding 9k lines of the features you care about. I have not run any open source projects that accept external contributions, but my understanding in general is that you need to demonstrate that you will stick around before being trusted with just adding large features. All code is technical debt; you can't just take on every drive-by pull request in hopes they will come back to fix it when it breaks a year down the line.
If you want to be really nice, you can even give them help in breaking up their PR.
Edit: left out that the user got flamed by non-contributors for their apparently AI-generated PR and description (rude), in defense of which they did say they were using several AI tools to drive the work:
We have a performance working group which is the venue for discussing perf based work. Some of your ideas have come up in that venue, please go make issues there to discuss your ideas
my 2 cents on AI output: these tools are very useful, please wield them in such a way that it respects the time of the human who will be reading your output. This is the longest PR description I have ever read and it does not sound like a human wrote it, nor does it sound like a PR description. The PR also does multiple unrelated things in a single 1k line changeset, which is a nonstarter without prior discussion.
I don't doubt your intention is pure, ty for wanting to contribute.
There are norms in open source which are hard to learn from the outside, idk how to fix that, but your efforts here deviate far enough from them in what I assume is naivety that it looks like spam.
“AI Slop attacks on the curl project” https://youtu.be/6n2eDcRjSsk
I 100% know what you mean, and largely agree, but you should check out the guidelines, specifically:
> Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
And like, the problem _is_ *bad*. A fun, on-going issue at work is trying to coordinate with a QA team who believe chatgpt can write css selectors for HTML elements that are not yet written.
That same QA team deeply cares about the spirit of their work, and is motivated by the _very_ relatable sentiment of: you DON'T FUCKING BREAK USER SPACE.
Yeah, in the unbridled, chaotic, raging plasma that is our zeitgeist at the moment, I'm lucky enough to have people dedicating a significant portion of their life to trying to do quality assurance in the idiomatic, industry best-standard way. Blame the FUD, not my team.
I would put it to you that they do not (yet) grok what, for lack of a more specific universally understood term, we are calling "AI" (or LLMs if you are Fancy; but of course none of these labels are quite right). People need time to observe and learn. And people are busy with /* gestures around vaguely at everything */.
So yes, we should acknowledge that long-winded trash PRs from AI are a new emergent problem, and yes, if we study the specific problem more closely we will almost certainly find ever more optimal approaches.
Writing off the issue as "stupidity" is mean. In both senses.
We collectively used the strategy of "we pretend we are naively stupid and don't talk directly about issues" in multiple areas ... and it failed every single time in all of them. It never solves the problem; it just invites bad/lazy/whatever actors to play manipulative semantic games.
Much more so than before, I'll comfortably reject a PR that is hard to follow, for any reason, including size. IMHO, the biggest change that LLMs have brought to the table is that clean code and refactoring are no longer expensive, and should no longer be bargained for, neglected or given the lip service that they have received throughout most of my career. Test suites and documentation, too.
(Given the nature of working with LLMs, I also suspect that clean, idiomatic code is more important than ever, since LLMs have presumably been trained on that, but this is just a personal superstition, that is probably increasingly false, but also feels harmless)
The only time I think it is appropriate to land a large amount of code at once is if it is a single act of entirely brain dead refactoring, doing nothing new, such as renaming a single variable across an entire codebase, or moving/breaking/consolidating a single module or file. And there better be tests. Otherwise, get an LLM to break things up and make things easier for me to understand, for crying out loud: there are precious few reasons left not to make reviewing PRs as easy as possible.
So, I posit that the emotional reaction from certain audiences is still the largest, most exhausting difference.
Are you contending that LLMs produce clean code?
All the LLM hate here isn't observation, it's sour grapes. Complaining about slop and poor code quality outputs is confessing that you haven't taken the time to understand what is reasonable to ask for, aren't educating your junior engineers how to interact with LLMs.
Can it also be that different people work in different areas, and LLMs are not equally good in all areas?
You could also be asking for too much in one go, though that's becoming less and less of a problem as LLMs improve.
Yes, that is how this works. I'm talking about the case where you're providing a good query and getting poor results. Claiming that this can be solved by more LLM conversations and ultrathink is cope.
'Git gud' isn't much of a truism.
Those days are gone. Coding is cheap. The same LLMs that enable people to submit 9000 line PRs of chaos can be used to quickly turn them into more sensible work. If they genuinely can't do a better job, rejecting the PR is still the right response. Just push back.
No. We need spam filters for this stuff. If it isn't obvious to you yet, it will be soon. (Or else you're one of the spammers.)
Don’t just leave it there, that reflects badly on you and your project and pushes away good contributors. If the PR is inadequate, close it.
Possibly you reject it with "this seems more suitable for a fork than a contribution to the existing project". After all there's probably at least some reason they want all that complexity and you don't.
"review it like it wasn't AI generated" only applies if you can't tell, which wouldn't be relevant to the original question that assumes it was instantly recognisable as AI slop.
If you use AI and I can't tell you did, then you're using it effectively.
After pointing out 2-3 things, you can just say that the quality seems too low and to come back once it meets standards. Which can include PR size for good measure.
If the author can't explain what the code does, make an explicit standard that PR authors must be able to explain their code.
Sometimes I get really into a problem and just build. It results in very large PRs.
Marking the PR as a draft epic, then breaking it down into a sequence of smaller PRs, makes it much easier to review. But you can solicit big-picture critique there.
I’m also a huge fan of documentation, so each PR needs to be clear, describe the bigger picture, and link back to your epic.
Let's say it's the 9000 lines of code. I'm also not reviewing 900 lines, so it would need to be more than 10 PRs. The code needs to be broken down into useful components, that requires the author to think about design. In this case you'd probably have the DSL parser as a few PRs. If you do it like that it's easier for the reviewer to ask "Why are you doing a DSL?" I feel like in this case the author would struggle to justify the choice and be forced to reconsider their design.
It's not just chopping the existing 9000 lines into X number of bits. It's submitting PRs that makes sense as standalone patches. Submitting 9000 lines in one go tells me that you're a very junior developer and that you need guidance in terms of design and processes.
For open source I think it's fine to simply close the PR without any review and say: break this down if you want me to look at it. Then if a smaller PR comes in, it's easier to assess if you even want the code. But if you're the kind of person that doesn't think twice about submitting 9000 lines of code, I don't think you're capable of breaking down your patch into sensible sub-components.
If the PR author can satisfy it, I'm fine with it.
At that point it is just malicious.
While it would be nice to ship this kind of thing in smaller iterative units, that doesn’t always make sense from a product perspective. Sometimes version 0 has bunch of requirements that are non-negotiable and simply need a lot of code to implement. Do you just ask for periodic reviews of the branch along the way?
- Chains of manageable, self-contained PRs each implementing a limited scope of functionality. “Manageable” in this context means at most a handful of commits, and probably no more than a few hundred lines of code (probably less than a hundred tbh).
- The main branch holds the latest version of the code, but that doesn’t mean it’s deployed to production as-is. Releases are regularly cut from stable points of this branch.
- The full “product” or feature is disabled by a false-by-default flag until it’s ready for production (a minimal sketch follows after this list).
- Enablement in production is performed in small batches, rolling back to disabled if anything breaks.
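For what it's worth, a minimal sketch of that false-by-default flag pattern; the flag name, the in-memory store, and the pipeline functions are assumptions for illustration, not any particular flag library:

    # Hypothetical false-by-default feature flag guarding dark-launched code.
    FEATURE_FLAGS = {
        "new_dsl_pipeline": False,  # merged to main, but off until ready
    }

    def is_enabled(flag: str) -> bool:
        return FEATURE_FLAGS.get(flag, False)

    def legacy_pipeline(payload: dict) -> dict:
        return {"result": "legacy", **payload}   # current production path

    def new_pipeline(payload: dict) -> dict:
        return {"result": "new", **payload}      # landed via small PRs, dark

    def handle_request(payload: dict) -> dict:
        # Flip the flag per environment (or per small batch of users) to
        # enable; flip it back to roll back without reverting any code.
        if is_enabled("new_dsl_pipeline"):
            return new_pipeline(payload)
        return legacy_pipeline(payload)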
Are you hiding them from CIA or Al-Qaeda?
Feature toggles, or just plain Boolean flags, are not rocket science.
If it's a newcomer to the project, a large self contained review is more likely to contain malware than benefits. View with suspicion.
The PR will be huge, plus AI is great at adding tons of shallow tests.
I see tests as little pins that hold your codebase down. They can be great for overall stability, but too many and your project becomes inflexible and brittle.
In this case you’d be nailing a bunch of code that you don’t want to the code base.
That alone should be the reason to block it. But LLM generated code is not protected by law, and by extension you can damage your code base.
My company does not allow LLM-generated code into anything that is their IP. Generic stuff outside of IP is fine, but every piece has to be flagged that it was created by an LLM.
In short, these are just the next evolution of low quality PRs.
Accepting code into the project when only one person (the author) knows what it does is a very bad idea. That's why reviews exist. Accepting code that zero persons know what it does is sheer screaming insanity.
But for any open source or enterprise project? Hell no.
Having spent some time vibe coding over the weekend to try it out, I disagree. I understand every line of code in the super-specific Android app I generated, even if I don't have the Android dev experience to come up with the code off the top of my head. Laziness is as good a reason to vibe code as inexperience or incompetence.
I wouldn't throw LLM code at a project like this, though, especially not in a PR of this size.
That's the point though: if they can't do it, then you close the ticket and tell them to fork off.
Comment & Close PR, only engage in discussions on tickets or smaller, understandable PRs.
As others have said: if someone drive-by opens a huge PR, it's as likely to be malware as a beneficial implementation.
Above all, you aim to allow the contributor to be productive, you make it clear what constraints they need to operate under in order to use AI codegen effectively. You want to come across as trying to help them and need to take care not to appear obstructive or dismissive.
“This PR is really long and I’m having a hard time finding the energy to review it all. My brain gets full before I get to the end. Does it need to be this long?”
Force them to make a case for it. Then see how they respond. I’d say good answers could include:
- “I really tried to make it smaller, but I couldn’t think of a way, here’s why…”
- “Now that I think about it, 95% of this code could be pushed into a separate library.”
- “To be honest, I vibe coded this and I don’t understand all of it. When I try to make it smaller, I can’t find a way. Can we go through it together?”
AI is red herring in discussions like this. How the change was authored makes no difference here.
I wouldn't. I'd reject it. I'd reject it even if the author had lovingly crafted each line by hand. A change request is not "someone must check my work". It's a collaboration between an author and a reviewer. If the author is failing to bother respecting the reviewer's time then they don't deserve to get a review.
I'd ask them to write up their thought process: why they made the decisions they made, and what the need is for so many files and so many changes. I may ask for a videoconference to understand better, if it's a colleague from work.
By now you should hopefully know whether their approach is valid or not. If not sure yet, then I'd take a look at the code, especially at the parts they refer to most in their answer to my previous questions. So not a detailed review; a more general pass, to decide if this is valid or not.
If it's a valid approach, then I guess I'd review it. If not, then give feedback as to how to make it valid, and why it isn't.
Not valid is very subjective. From "this is just garbage", to "this is a good approach, but we can implement this iteratively in separate PRs that will make my life easier", again, it depends on your and their position.
In your case, I'd just reject it and ensure repo merges require your approval.
A new language feature is released; you cannot apply it to old code, since that would make a big PR. You need to do it super slowly over time, and most old code will never see it.
A better static type checker that finds some bugs for you: you cannot fix them, as your PR would be too big; instead you would need to make a baseline and split it up endlessly.
In theory yes, maybe a bit safer to do it this way, but discouraging developers from making changes is bad IMO. Obviously it depends on your use case; if you develop software that is critical to people's literal lives, then you'll move more carefully.
But I wager 99% of the software the world produces is some commerce software, where the only thing lost is money.
Good. Don't change code for the sake of shiny new things syndrome.
> A better static type checker, that finds some bugs for you, you cannot fix them as your PR would be too big,
Good. Report each bug separately, with a suggested fix, categorised by region of the code. Just because you ran the program, that doesn't mean you understand the code well enough to actually fix stuff: those bugs may be symptomatic of a deeper issue with the module they're part of. The last thing you need is to turn accidentally-correct code into subtly-wrong code.
If you do understand the code well enough, what's the harm in submitting each bugfix as a separate (independent) commit? It makes it easier for the reviewers to go "yup, yup, yup", rather than having to think "does this part affect that part?".
"Thanks for the effort, but my time and energy is limited and I can't practically review this much code, so I'm closing this PR. We are interested in performance improvements, so you are welcome to pick out your #1 best idea for performance improvement, discuss it with the maintainers via ..., and then (possibly) open a focused PR which implements that improvement only."
As for me, my position is: "My project is my house. You want to be a guest in my house, you follow my rules. I really like people and am usually happy to answer questions from people who are reasonably polite, to review and provide feedback on their PRs, and so on. But I won't be pressured to prioritize your GitHub issue or PR over my work, my family, my friends, my health, or my personal goals in life. If you try to force me, I'll block you and there will be no further interaction."
If you don't like that position, well, I understand your feelings.
There has to be a better reason than "your PR is too big", as that's likely just a symptom, and it's very much context-sensitive. If it is a 5kLOC PR that adds a compiler backend for a new architecture, then it probably deserves attention because of its significance.
But if it's obviously low-quality code, then my response would be that it is low-quality code. Long story short, it's a you (submitter) problem, not a me (reviewer, BDFL) problem.
As a reviewer or as a submitter?
So to me, it's less about being ridiculous (and "ridiculous" is a fighting word) and more a simple "that's not how this team does things because we don't have the resources to work that way."
Mildly hurt feelings in the most likely worst case (no fodder for a viral over-the-top tweet). At best, recruitment of someone with cultural fit.
My experience is that “too large/complex” provides an opening for argumentativeness and/or drama.
“We don’t do it like this” does not so much. It is social, sufficient and not a matter of opinion (“too” is a matter of opinion).
10 lines of code = 10 issues.
500 lines of code = "looks fine."
Code reviews.
+153675, -87954: I don't care. Just taking the time to read it would take longer than bondo-fixing the related bugs.
Or just refuse to review and let the author take full responsibility in running and maintaining the thing, if that's possible. A PR is asking someone else to share responsibility in the thing.
The PR would then be split into small ones up to 400 lines long.
In truth, such a big PR is an indicator that either (a) the original code is a complete mess and needs reengineering or more likely (b) the PR is vibe coded and is making lots of very poor engineering decisions and goes in the bin.
We don’t use AI agents for coding. They’re not ready. Autocomplete is fine. Agents don’t reason like engineers, they make crap PRs.
Maybe the AI was too clever for its own good? Have AI coding assistants evolved from junior (naive but alright) to mid-level (overly complicated and complete)?
Even 1000 lines is pushing it, IMO. Tell them to split the PR up into more granular work if they want it merged.
1. Reject it on the grounds of being too large to meaningfully review. Whether they used AI or not, this is effectively asking them to start over in an iterative process where you review every version of the thing and get to keep complexity in check. You'll need the right power and/or standing for this to be a reasonable option. At many organisations, you'd get into trouble for it as "blocking progress". If the people that pay you don't value reliability or maintainability, and you couldn't convince them that they should, that's a tough one, but it is how it is.
2. Actually review it in good faith: Takes a ton of time for large, over engineered changes, but as the reviewer, it is usually your job to understand the code and take on responsibility for it. You could propose to help out by addressing any issues you find yourself rather than making them do it, they might like that. This feels like a compromise, but you could still be seen as the person "blocking progress", despite, from my perspective, biting the bullet here.
3. Accept it without understanding it. For this you could _test_ it and give feedback on the behaviour, but you'd ignore the architecture, maintainability etc. You could still collaboratively improve it after it goes live. I've seen this happen to big (non-AI generated) PRs a lot. It's not always a bad thing. It might not be good code, but it could well be good business regardless.
Now, however you resolve it, it seems like this won't be the last time you'll struggle to work with that person. Can, and do they want to, change? Do you want to change? If you can't answer either of these questions with a yes, you'll probably want to look for ways of not working with them going forward.
One suggestion that possibly is not covered: you/we can clearly document how AI-generated PRs will be handled, make it easy for contributors to discover that documentation, and if/when such a PR shows up, point to the documented section to save yourself time.
Then you can say (and this is hard): this looks like vibe code and misses that first human pass we want to see in these situations (link); please review and afterwards feel free to (re)submit.
In my experience they'll go away. Or they come back with something that isn't cleaned up and you point out just one thing. Or sometimes! they actually come back with the right thing.
The question itself doesn't matter. Just ask something. If their answer is genuine and making sense you deal with it like a normal PR. If their answer is LLM-generated too then block.
Personally I think it's difficult to address these kinds of PRs, but I also think that git is terrible at providing solutions to this problem.
The concept of stacked PRs is fine up to the point where you need to make changes throughout all your branches; then it becomes a mess. If you (like me) have a tendency to rewrite your solution several times before ending up with the final result, then having to split this into several PRs does not help anyone. The first PR will likely be outdated the moment I begin working on the next.
Open source is also more difficult in this case, because contrary to working for a company with a schedule, deadlines etc., you can't (well, you shouldn't) rush a review when it's on your own time. As such, PRs can sit for weeks or months without being addressed. When you eventually need to reply to comments about how, why etc., you have forgotten most of it and need to read the code yourself to reclaim the reasoning. At that point it might be easier to re-read a 9000-line PR over time rather than reading 5-10 PRs with maybe-meaningful descriptions and outcomes, where the implementation changes every time.
Also, if it's from a new contributor, I wouldn't accept such a PR, vibe coded or not.
If an engineer really cared, they would discuss these changes with you. Each new feature would be added incrementally, ensuring that it doesn't break the rest of the system. This will allow you to understand their end goal while giving them an avenue to achieve it without disrupting your end goal.
If expectations have been shared and these changes contradict them, you can quickly close the PR, explain why it's not acceptable, and ask them to redo it.
If you don't have clear guidelines on AI usage or haven't shared your expectations, you'll need to review the PR more carefully. First, verify whether your assumption that it’s a simple service is accurate (although from your description, it sounds like it is). If it is, talk to the author and point out that it's more complicated than necessary. You can also ask if they used AI and warn them about the complexities it can introduce.
5 minutes, off the cuff.
In your case, 9000 LOC and 63 files isn't that crazy for a DSL. Does the DSL serve a purpose? Or is it just someone's feature fever dream to put your project on their resume?
That data point is waaaaaay more important than any other when considering if you should think about reviewing it or not.
I would ask them to break it up into smaller chunks.
How does one handle that with tact and not lose their minds?
If you don't have test coverage, or if the "refactor" is also changing behaviour, that project is probably dead. Make sure there's a copy of the codebase from before the new lead joined so there's a damage mitigation roll back option available.
The only exception is some large migration or version upgrade that required lots of files to change.
As far as it goes for vibe-coded gigantic PRs, it's a straight reject from me.
If I can write eight 9k-line PRs every day and open them against open source projects, even closing them, let alone engaging with them in good faith, is an incredible time drain compared to the time investment needed to create them.
The question is what was the original task that needed to be fixed? I doubt it required a custom DSL.
Issue a research task first to design the scope of the fix, what needs to be changed and how.
In my eyes, there really shouldn't be more than 2-3 "full" files worth of LOC for any given PR (which should aim to address 1 task/bug each; if not, maybe 2-3 at most), and general wisdom is to aim to keep "full" files around 600 LOC each (for legacy code this is obviously very flexible, if not infeasible, but it's a nice ideal to keep in mind).
An 1800-2000 LOC PR is already pushing what I'd want to review, but I've reviewed a few like that when laying scaffolding for a new feature. Most PRs are usually a few dozen lines in 4-5 files each, so it's far below that.
9000 just raises so many red flags. Do they know what problem they are solving? Can they explain their solution approach? Give general architectural structure to their implementation? And all that is before asking the actual PR concerns of performance, halo effects, stakeholders, etc.
I'll just assume good intent first of all. Second, 9000 LOC spanning 63 files is not necessarily AI-generated code. It could be a code mod. It could be a prolific coder. It could be a lot of codegen'd code.
Finally, the fact that someone is sending you 9000 LOC hints that they find this OK, and this is an opportunity to align on your values. If you find it hard to review, tell them so: "I find it hard to review, I can't follow the narrative, it's too risky", etc.
Code review is almost ALWAYS an opportunity to have a conversation.
This doesn't help all the time. There are those people who still keep finding things they want you to change a week after they first reviewed the code. I try to avoid including them in the code review. The alternative is to talk to your manager about making some rules, like giving reviewers only a day or two to review new code. It's easy to argue for that because those late comments really hinder productivity.
State the PR is too large to be reviewed, and ask the author to break it down into self-contained units.
Also, ask which functional requirements the PR is addressing.
Ask for a PR walkthrough meeting to have the PR author explain in detail to an audience what they did and what they hope to achieve.
Establish max diff size for PRs to avoid this mess.
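If you want to automate that limit, here's a hedged sketch of a CI guard; the 1000-line budget and the base branch are assumptions to adapt, while `git diff --shortstat` is standard git:

    # Fail CI when a PR's diff exceeds a line budget (insertions + deletions).
    import subprocess
    import sys

    MAX_CHANGED_LINES = 1000  # assumed budget; tune to taste

    def changed_lines(base: str = "origin/main") -> int:
        out = subprocess.run(
            ["git", "diff", "--shortstat", f"{base}...HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Output looks like: " 63 files changed, 9000 insertions(+), 12 deletions(-)"
        numbers = [int(tok) for tok in out.replace(",", " ").split() if tok.isdigit()]
        return sum(numbers[1:])  # skip the leading file count

    if __name__ == "__main__":
        n = changed_lines()
        if n > MAX_CHANGED_LINES:
            sys.exit(f"Diff is {n} changed lines; limit is {MAX_CHANGED_LINES}. Please split the PR.")
        print(f"Diff size OK: {n} changed lines.")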
Was your project asking for all this? No? Reject.
Maybe getting their own time wasted will teach the submitter about the value of clarity and how it feels to be on the receiving end of a communication with highly asymmetric effort.
This should be convincing enough even for a non-technical team lead while for the initial PR it might be hard to explain objectively why it's bad.
That said, even with automated review, a 9000 line PR is still a hard reject. The real issue is that the submitter probably doesn't understand the code either. Ask them to walk you through it or break it down into smaller pieces. If they can't, that tells you everything.
The asymmetry is brutal though. Takes an hour to generate 9000 lines, takes days to review it properly. We need better tooling to handle this imbalance.
(Biased take: I'm building cubic.dev to help with this exact problem. Teams like n8n and Resend use it to catch issues automatically so reviewers can focus on what matters. But the human review is still essential.)
The greater danger is that AI can create or modify code into something that is disconnected, stubbed, and/or deceptive and claim it’s complete. This is much worse because it wastes much more time, but AI can fix this too, just like it can the slop- maybe not deterministically, but it can.
And because of this, those that get in the way of creating source with AI are just cavemen rejecting fire.
- Ask them to walk you through it.
- Ask for a design doc if appropriate.
- What is the test plan? Who is responsible for prod delivery and support?
- (No difference from any other large PR.)
If you are lucky, they will also vibe fix it.
1. A more biological approach to programming - instead of reviewing every line of code in a self-contained way, the system would be viewed in a more holistic way, observing its behaviour and test whether it works for the inputs you care about. If it does, great, ship it, if not, fix it. This includes a greater openness to just throwing it away or massively rewriting it instead of tinkering with it. The "small, self-contained PRs" culture worked well when coding was harder and humans needed to retain knowledge about all of the details. This leads to the next point, which is
2. Smaller teams and less fungibility-oriented practices. Most software engineering practices are basically centred around making the bus factor higher, speeding onboarding up and decreasing the volatility in programmers' practices. With LLM-assisted programming, this changes quite a bit: a smaller, more skilled team can more easily match the output of a larger, more sluggish one, due to the reduced communication overhead and being able to skip all the practices which slow development velocity down in favour of doing things. A day ago, the good old Arthur Whitney-style C programming was posted to this site (https://news.ycombinator.com/item?id=45800777) and most commenters were horrified. Yes, it's definitely a mouthful on first read, but this style of programming does have value - it's easier to overview and easier to modify than a 10KLOC interpreter spanning 150 separate files, and it's also quite token-efficient. Personally, I'd add some comments, but I see why this style is the way it is.
Same with style guides and whatnot - the value of having a code style guide (beyond basic stuff like whitespace formatting or word-wrapping at 160) drastically drops when you do not have to ask people to maintain the same part for years. You see this discussion playing out: "my code formatter destroyed my code and made it much less readable" - "don't despair, it was for the greater good, for the sake of codebase consistency!". Again, way less of a concern when you can just tell an LLM to reformat/rename/add comments if you want.
I'd definitely say that getting the architecture right is way more important, and let the details play out in an organic way, unless you're talking about safety-critical software. LLM-written code is "eventually correct", and that is a huge paradigm shift from "I write code and I expect the computer to do what I have written".
Congratulations, you discovered that generating code is only part of the software development process. If you don't understand what the code is actually doing, good luck maintaining it. If it's never reviewed, how do you know these tests even test anything? Because they say "test passed"? I can write you a script that prints "test passed" a billion times - would you believe it is a billion unit tests? If you didn't review them, you don't have tests. You have a pile of code that looks like tests. And "it takes too long to review" is not an excuse - it's like saying "it's too hard to make a car, so I just took a cardboard box, wrote FERRARI on it and sat inside it making car noises". Fine, but it's not a car. It's just pretending. If it's not properly verified, what you have is not tests; it's just pretending.
As someone on the "senior" side, AI has been very helpful in speeding up my work. I work with many languages and many projects I haven't touched in months, and while my code is relatively simple, the underlying architecture is rather complex. So where I do use AI, my prompts are very detailed. Often I spot mistakes that get corrected, etc. With this I still see a big speedup (at least 2x, often more). The quality is almost the same.
However, I noticed many "team leads" try to use the AI as an excuse to push too difficult tasks onto "junior" people. The situation described by the OP is what happens sometimes.
Then when I go to the person and ask for some weird thing they are doing I get "I don't know, copilot told me"...
Many times I tried to gently steer such AI users towards using it as a learning tool. "Ask it to explain things you don't understand." "Ask questions about why something is written this way." And so on. Not once have I seen it used like this.
But this is not everyone. Some people have this skill which lets them get a lot more out of pair programming and AI. I had a couple of trainees in the current team 2 years ago who were great at this. This was "pre-AI" at this company, but when I was asked to help them, they were asking various questions, and 6 months later they were hired on a permanent basis. Contrast this with: - "So how should I change this code?" - You give them a fragment, they go put it in verbatim and come back via Teams with a screenshot of an error message...
Basically expecting you will do the task for them. Not a single question. No increased ability to do it on their own.
This is how they try to use AI as well. And it's a huge time waster.
Also, people with that mentality were a waste of time before AI too.
Personally, I've felt drained dealing with small PRs fixing actual bugs by enthusiastic students new to projects in the pre-slop era.
Particularly if I felt they were doing it more to say they'd done it, rather than to help the project.
I imagine that motive might help drive an increase in this kind of thing.
[ Close with comment ]
A simple email could tell the difference.
As an example: you have made a summarization app. A user tries to upload a 1 TB file. What do you do? Reject the request.
You have made a summarization app. A user tries to upload a 1-byte file 1000 times. What do you do? Reject the request.
However, this is for the accidental or misconfigured user. What if you have a malicious user? There are many techniques for this as well: the hell-ban, the tarpit, the limp.
For the hell-ban, simply do not handle the request. It appears to be handled, but is not.
For the tarpit, raise the difficulty for the request maker, e.g. put Claude Code with the GitHub MCP on the case, give it broad instructions to be very specific and to request concise code and splits etc., then put subsequent PRs also into CC with the GitHub MCP.
For the limp, provide comments slowly, using a machine.
Assuming you're not working with such a person. If you are working with such a person, email the boss and request they be fired. For the good of the org, you must kill the demon.
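A hedged sketch of the reject/hell-ban/limp ideas above, for the summarization-app example; the handler, size limits, and user lists are all made up for illustration:

    # Hypothetical upload handler applying the techniques described above.
    import time

    MAX_BYTES = 100 * 1024 * 1024       # assumed cap: reject oversized requests
    MIN_BYTES = 10                      # assumed floor: reject degenerate 1-byte spam
    SHADOW_BANNED = {"known-bad-actor"} # hell-ban list (made up)
    SUSPECT = {"probable-spammer"}      # limp list (made up)

    def handle_upload(user: str, data: bytes) -> str:
        if user in SHADOW_BANNED:
            return "accepted"           # hell-ban: appears handled, but is not
        if not (MIN_BYTES <= len(data) <= MAX_BYTES):
            return "rejected: size out of bounds"
        if user in SUSPECT:
            time.sleep(60)              # limp: the reply arrives, just very slowly
        return "accepted"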
I once had someone submit a patch (back in the SVN days), that was massive, and touched everything in my system. I applied it, and hundreds of bugs popped up.
I politely declined it, but the submitter got butthurt, anyway. He put a lot of work into it.
Unless you really trust them, it's up to the contributor to make their reasoning work for the target. Else, they are free to fork it if it's open source :).
I am a believer in using LLM codegen as a ride-along expert, but it definitely triggers my desire to over-test software. I treat most codegen as if the most junior coder had written it, and set up guardrails against as many things as the LLM and I can come up with.
PRs should be under 1000 lines.
The alternative is to sit down with them and ask what they're trying to accomplish and solve the problem from that angle.
If the code sucks, reject it. If it doesn't, accept it.
This isn't hard.
(ctrl-v)
If they don't bother writing the code, why should you bother reading it? Use an LLM to review it, and eventually approve it. Then of course, wait for the customer to complain, and feed the complaint back to the LLM. /s
Large LLM generated PRs are not a solution. They just shift the problem to the next person in the chain.
And if they did in fact spend 6 months painstakingly building it, it wouldn't hurt to break it down into multiple PRs. There is just so much room for error reviewing such a giant PR.
Sometimes it doesn't split along optimal boundaries, but it's usually good enough to help. There's probably room for improvement and extension (e.g. re-splitting a branch containing many non-logical commits, moving changes between commits, merging commits, ...) – contributions welcome!
You can install it as a Claude Code plugin here: https://github.com/KevinWuWon/kww-claude-plugins (or just copy out the prompt from the repo into your agent of choice)