I've really jumped into this since I watched Geoffrey's videos last week. I ended up creating my own version of it, and have been throwing small projects at it so far.
I created a small Claude skill that helps create the "specs" for a new or existing project. It adds a /specs folder with a README that acts as a lookup for topics and features of the app, the technical approach, and the feature set. Once we've chatted, it spawns off subagents to do research and present their findings in the relevant spec. In terms of improvements, I'd like a more opinionated back-and-forth between "PM-type" agents to help pressure-test ideas and implementation approaches.
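For context, the layout it generates is roughly this (file names here are made up for illustration, not what the skill actually emits):

    specs/
      README.md         # index: lists each topic/feature and links to its spec
      architecture.md   # technical approach
      feature-foo.md    # one spec per feature, filled in by the research subagents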
I've got the planning and build loop set up in the claude devcontainer; it's somewhat fragile at the moment, but works for now.
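For anyone curious, the loop is nothing fancy; roughly this shape (prompt file names and flags are illustrative, not exact):

    # run inside the devcontainer; plan pass, then build pass, forever
    while :; do
      # planning pass: reconcile IMPLEMENTATION_PLAN.md against the specs
      cat PROMPT_PLAN.md | claude -p --dangerously-skip-permissions
      # build pass: pick the next item off the plan and implement it
      cat PROMPT_BUILD.md | claude -p --dangerously-skip-permissions
    done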
In terms of chewing up context, I've noticed that depending on the size of the project, the "IMPLEMENTATION_PLAN.md" can get pretty massive. If each agent run needs to parse that whole plan just to figure out what to do next, that feels like a lot of wasted context. I'm working on making the plan more granular so there is less to parse when deciding the next step.
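The direction I'm leaning is one small file per task, so a run only ever reads the next open item; something like this (names illustrative, nothing settled yet):

    plan/
      done/               # finished tasks get moved here
      010-add-auth.md     # one small, self-contained task per file
      020-settings-ui.md

    # the build prompt then only needs the lowest-numbered open task
    next_task=$(ls plan/*.md | sort | head -n 1)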
Overall, it's been fun and has kept me really engaged the past week.
There's some debate about whether this is in the spirit of the _original_ Ralph, because it keeps too much context history around. But in practice, Claude Code compactions are so low-quality that it's basically the same as clearing the history every few turns.
I've had good luck giving it goals like "keep working until the integration test passes on GitHub CI." That was my longest run, actually: it ran unattended for 24 hours before solving the bug.
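The outer loop for that run was basically "run the agent, push, check CI, repeat"; roughly this, using gh, with $PR_NUMBER standing in for whatever PR it was iterating on:

    # keep going until every check on the PR is green; rough sketch
    until gh pr checks "$PR_NUMBER" --watch; do
      cat PROMPT.md | claude -p --dangerously-skip-permissions
      git push
    done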
I've been working the Ralphosophy (if that's the word) into my workflow for iterative behavior, and it seems pretty promising for cutting out a few manual steps.
I still have one manual part, which is breaking the design document down into multiple small GitHub issues after a review, but I think that is fine for now.
Using codex exec, we start working on a GitHub issue with a supplied design document, creating a PR on completion. Then we perform a review using a review skill I made up, which is effectively just a "cite your sources" skill applied to the review, along with a list of Open Questions.
Then we iterate through the open questions, doing a minimum of three reviews (somewhat arbitrary, but multiple reviews sometimes catch things). Finally, I have a step for checking SonarCloud, fixing the issues it flags, and pushing the changes. Realistically this step should be broken out into multiple iterations to avoid context rot.
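Stitched together, the whole thing looks roughly like this (prompts paraphrased; $ISSUE and DESIGN.md are placeholders):

    # 1. implement the issue from the design doc and open a PR
    codex exec "Work on issue #$ISSUE using DESIGN.md as the spec; open a PR when done."

    # 2. review loop: at least three passes, resolving Open Questions each time
    for i in 1 2 3; do
      codex exec "Review the PR using the cite-your-sources review skill; list Open Questions, then address them."
    done

    # 3. SonarCloud pass (should really be its own set of iterations)
    codex exec "Check the SonarCloud findings on this PR, fix them, and push."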
What I miss the most is output: seeing what's going on in either Codex or Claude in real time. I can print the last response, but it just gets messy until I build something a bit more formal.
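For now, the closest I get is teeing the raw output to a log and tailing it from another terminal, e.g. something like:

    codex exec "$(cat PROMPT.md)" 2>&1 | tee -a run.log   # then: tail -f run.log elsewhere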
So it took the author 6 months and several 1-to-1s with the creator to get value from this. As in he literally spent more time promoting it than he did using it.
And it all ends with the grift of all grifts: promoting a crypto token in a nonchalant 'hey whats this??!!??' way...
I don't think anyone serious would recommend it for serious production systems. I respect the Ralph technique as a fascinating learning exercise in understanding LLM context windows and how to squeeze more performance (read: quality) out of today's models.
Even if, in absolute terms, the ceiling remains low, it's interesting how much good context engineering raises it.
How is it a "fascinating learning exercise" when the intention is to run the model in a closed loop with zero transparency? Running a black box inside a black box, to learn? What signals are you even listening to in order to determine whether your context engineering is good or whether the quality has improved, aside from a brief glimpse at the final product? So essentially, every time I want to test a prompt, I waste $100 on Claude and have it build an entire project for me?
I'm all for AI, and it's evident that the future of AI is more transparency (MLOps, tracing, mech interp, AI safety), not less.
You probably wouldn't use it for anything serious, but I've Ralphed a couple of personal tools: Mac menu bar apps, mostly. It works reasonably well so long as you do the prep upfront and prepare a decent spec and plan. I have no idea about the code quality, because I wouldn't know good Swift code from a hole in the ground, but the apps work and scratch the itch that motivated them.
I do not understand where this Ralph hype is coming from.
Back when Claude 4.0 came out and it started to become actually useful, I already tried something like this. Every time, it was a complete and utter failure.
And this dream of "having Claude implement an entire project from start to finish without intervention" came crashing down with this realization: Coding assistants 100% need human guidance.
This is so poorly written. What is "Ralph"? What is its purpose? How does it work? A single sentence at the top would help. The writer imagines that the reader cares enough to have followed their entire journey, or to decode this enormously distended pile of words.
More generally, I've noticed that people who spend a lot of time interacting with LLMs sometimes develop a distinct brain-fried tone when they write or talk.
Please don't post shallow dismissals of other people's work (this is in the site guidelines: https://news.ycombinator.com/newsguidelines.html) and especially please don't cross into personal attack.
"develop a distinct brain-fried tone when they write or talk" - I find that using an LLM as a writing copilot seriously degrades the flow of short form content
The key bit is right under that though. Ralph is literally just this:
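i.e., roughly this (the exact flags vary between writeups, but the shape is just a bare loop feeding one prompt file into the agent in a fresh context each time):

    # the whole technique: same prompt, fresh context, forever
    while :; do
      cat PROMPT.md | claude -p --dangerously-skip-permissions
    done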
It's complete garbage, and since it runs in a loop, the amount of garbage multiplies over time.