I love the website: the design, the video, the NSFW toggle, the simplicity.
I love the idea; definitely something I ran into a few times before and wish I had.
Unfortunately, I am not installing a closed-source daemon with access to the filesystem from an unknown (to me) developer. I will bookmark this and revisit in a few weeks, hoping you'll have published the source by then. :)
I agree. It’s a neat idea and I’d be interested in seeing the details. A downloadable tarball is a lot better than nothing, but it still makes evaluating a random project more work than I’m inclined to perform. It makes me assume the commit history is ugly in some way (being charitable and assuming the code itself isn’t). Hearing that it’s developed within a monorepo of unrelated projects and experiments isn’t inspiring either. Anyway, perhaps someone else will download the source and report back.
Edit: To be clear, I’m not saying any of those things are true, just that those are the first thoughts I have when someone says their source is open but makes it difficult to view. In this age in which it’s so trivial and commonplace to make source easily viewable.
I love to use the terminal, and I still do. But as much as I love to unfu*k my local nvim setup, I'd much rather pay a company to do it for me. Set up Vim bindings inside JetBrains and everything comes with batteries included, along with a kick-ass debugger. While my colleagues are fighting opencode, I pointed my IDE at the correct MCP gateway and everything "just works" with more context.
Thought I'd share a data point in support of JetBrains.
I pay $179 per year for ALL their IDEs as an individual, about $15 a month. Nothing compares to their C, Go, and Rust IDEs. Chaining in Rust makes for very verbose statements; folding in JetBrains takes care of it. It took me an hour to get similar, yet still lacking, code folding in nvim with ufo. Sure, they aren't perfect. Working in multi-language repos requires multiple resource-hungry IDEs (but I typically just use nvim unless I am doing something very involved).
If I stop paying, I have the perpetual licence for the version at which I last paid so the "I WaNt tO owN mY SoFTWArE" crowd (which I am a part of) can choose to only pay when their current version starts lacking modern features. That's the reality of anything that you buy.
My experience is that the software development landscape evolves so frequently that a yearly refresh of modern convenience features makes it a no-brainer for a professional. I love to tinker, but between family and career, if I do happen to have a few hours to code I don't want to spend any of that time debugging my custom IDE. I used to, and I respect those that do, but that's just not how I want to spend my time.
Plus, I am very happy to support developers, and I encourage others to as well. Especially when the company isn't "branching out to sell user data, advertising."
What's wrong with their business model? Pay once, get a year of free updates and keep it forever. Runs locally. Want updates? Pay discounted renewal. Seems reasonable.
Their AI subscription OTOH needs work. Not worth it yet with how flaky it is.
FWIW, I don't think I use any software (unless it is single, narrow purpose) that is almost 1.5 decades old. Fun fact: Docker was released in 2013, so your IDE likely doesn't even support build targets in containers.
But if you have the receipt, email the company and ask for help obtaining an old version. They are very willing to help customers, both current and previous.
Interesting. On https://account.jetbrains.com/licenses/assets I have a "Download" button and "Download a code for offline activation" and "Generate legacy license key" buttons. I figured I could use one of those if I ever decide to cancel my sub, but I admit I have not tested the theory. It's possible your copy is indeed too old.
I think it only keeps history for user-edited files; agent-edited files don't seem to end up in it for me (Claude Code). But maybe it works with other agents with the proper plugins, I'm not sure.
This is so cool to have made yourself. How would you compare this to the functionality offered by jujutsu? I love the histogram, it was the first sort of thing I wanted out of jujutsu that its UI doesn't make very easy. But with jj the filesystem tracking is built in, which is a huge advantage.
I'm not a user, but I looked at the site and it looks like jj snapshots when you run a jj command. UNF snapshots continuously.
If an AI agent rewrites 30 files and you haven't touched jj yet, jj has the before-state but none of the intermediate states. UNF* has captured every save as it happened, at the filesystem level.
jj is a VCS. UNF is a safety net that sits below your VCS.
- UNF* works alongside git, jj, or no VCS at all
- No workflow change. You don't adopt a new tool, it just runs in the background
- Works on files outside any repo (configs, scratch dirs, notes) as it doesn't require git.
They're complementary, not competing.
W.r.t. the histogram, this is my favorite feature of the app as well. Session segmentation (still definitely not perfect) creates selectable regions to make it easier, too. The algorithm is in the CLI as well, for the Agent recap (rebuilding context) features.
To be fair, jujutsu has a watchman feature which uses inotify to create snapshots on file change as well. Your tool probably has a more tailored UX for handling these inter-commit changes, though, so it could still provide complementary value there.
This tool seems very promising and does solve a real issue I've certainly had quite a few times already where this would have been very useful.
I really like the website design and content-wise, as well as the detailed writeup here on HN. Certainly impressive work for an individual.
I've never used Time Machine, but I do have kopia set up for frequent backups of important places, like the entire Documents folder. It uses a CAS as well, with modification time/size/hashes for determining what to save.
Coming from that I'm curious about a few aspects. Would this work well for larger folders/files? You mentioned deduplication, but does that happen on the file level or on chunks of the file, so that small changes don't store an entire new version of the file? Additionally, are the stored files compressed somehow, probably a fast compression algorithm? I think that could make it work for more than just source code.
Great project though, so far. I could see it becoming a lot more popular given an open source code base. Maybe a pricing model like the one Mac Mouse Fix uses would work, being open source and charging a fee so small it still reaches a large audience. That would likely be fair considering the developer time/agent cost of just a single unf'd problem as ROI.
In today's version of "LLMs allow a person to write thousands of lines of code to replace built-in Unix tools":
inotifywait -mrq -e modify,create,delete /your/dir |
while read -r _; do
    git -C /your/dir add -A &&
    git -C /your/dir commit -qm "auto-$(date +%s)" --allow-empty
done
There are 8+ billion people on the planet, computing has been around a while now, and some REALLY smart people have published tools for computers. Ask yourself: "am I the first person to try to solve this problem?"
Odds are, one or more people have had this problem in the past and there's probably a nifty solution that does what you want.
OP here. It's hard to attempt to read and respond to this in good faith.
I think it would be dishonest if I didn't share that your approach to discourse here isn't a productive way of asking what insights I'm bringing.
If that's your concern, I agree I can't claim that nothing exists to solve pieces of the puzzle in different ways. I did my research and was happy that I could get a domain that explained the struggle -- namely unfucked.ai/unfudged.io. Moreover, I do feel there are many pieces and nuances to the experience which give pause to folks who create versioning tools.
I'm open to engaging if you have a question or comment that doesn't diminish my motives, assume I must operate in your worldview that "problems can only be solved once", or discourage people from trying new things and learning.
Look, I'm grateful that you stopped by and hope you'll recognize I'm doing my best to manage my own sadness that my children have to exist in a world where folks think this is how we should address strangers.
> assumes I must operate in your world view "problems can only be solved once"
I never claimed anyone else has to agree with this. That's why people are allowed different opinions.
Nobody ought to give a damn what I think, the only opinion that matters about you is your own.
But just like I won't ask you to adopt my view, I also won't go around patting people on the back for TODO apps.
My opinion: people ought to spend more time contributing to solving genuine problems. The world needs more of that, and less "I built a TODO app" or "Here's my bespoke curl wrapper".
Then replace git with "rsync" or "borg". But I don't see how running "git init" in a directory you have "days of work" accumulated in is a sticking point.
Git is a convenient implementation detail.
The core loop of "watch a directory for changes, create a delta-only/patch-based snapshot" has been a solved few-liner in bash for a long time...
There are a huge number of people coming into agentic coding with no real background in software dev and no real understanding of git, and even devs with years of experience will readily reach for convenience and polish when they could otherwise implement it themselves; see Vercel's popularity.
It is comparatively unsophisticated, but I need it so infrequently that it has been good enough.
I do like the idea of maintaining a complete snapshot of all history.
This is a good application for virtual filesystems. The virtual fs would capture every write in order to maintain a complete edit history. As I understand it, Google's CitC system and Meta's EdenFS work this way.
Yep, I’ve needed something like this a few times. Even when trying to be careful to commit every step to a feature branch, I’ve still found myself asking for code fixes or updates in a single iteration and kicking myself when I didn’t just commit the damn thing. This will be a nice safety net.
I spent a bit of time being baffled that nothing existed that does this. Then I realized that, until agents, the velocity of changes wasn't as quick and errors were rarer.
Thank you for pointing out a problem that I had (which I do!) and was solving with Time Machine and trying to make myself commit more frequently -- and for providing a solution! Looks very cool, too. If I close the terminal I started --watch in, will the watch continue?
Writing this, I wanted to ask if the desktop app includes the CLI, but it says so on your website :-) Thanks for thinking ahead so far, but then picking us up here and now so we can easily follow along into an unf* future!
Yes. Once you start a watch, it keeps watching until you stop it, including through closed terminals, computer power-off, etc. It should restart on reboot, but -- test it yourself and tell me if I'm wrong :)
> unf watch
# reboot
> unf list
It should say it's still watching your directory. If it's crashed or something else is off, ping me at support at v1.co.
Just one human; two machines at my home can't replicate all configurations...
Alternative - version files and catalog those versions (most of the work, with "Unfucked", appears to be catalog management), building it on top of a Versioning File System.
E.g. NILFS, a log-structured file system that logs every block change in real time.
I have used fossil in a similar way, also local, and sqlite based. Admittedly you have to add files to it first but setting it running via cron was simple enough. Though it wasn't be ause I let an AI access all my stuff.
Or btrfs, for that matter. I'm doing something similar with btrfs. I used zfs for a while, but the external repositories kept getting out of sync with the distribution kernel, so system updates required manual intervention. That annoyed the heck out of me over time. I switched back to btrfs, which has been working fine for the last year. 10 or so years earlier I still had data corruption and bugs with btrfs.
There's a 3-second debounce. Don't hold me to that timeframe; that's just the current default.
It doesn't read the file the instant the OS fires the event. It accumulates events and waits for 3 seconds of silence before reading. So if an editor does write-tmp → rename (atomic save), or a tool writes in chunks, we only read after the dust settles.
I accept there are cases where, if the editor crashes mid-write, you get a corrupted state, but there was never a good state to save; so, arguably, you'd just restore what's on file and remove the corrupt partial write.
It's not bulletproof against a program that holds a file open and writes to it continuously for more than 3 seconds, but in practice that doesn't happen with text files by Agent tools or IDEs.
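A minimal sketch of that debounce idea in shell (my own illustration, not UNF's actual code; `take_snapshot` is a placeholder for the real snapshot step):

```shell
# Debounce: act only after N seconds of event silence, so an atomic save
# (write-tmp then rename) or a chunked write triggers one snapshot, not two.
take_snapshot() { echo "snapshot at $(date +%s)"; }   # stand-in

debounce() {
    local quiet=${1:-3}   # seconds of silence required before acting
    while read -r _event; do
        # Drain follow-up events until none arrive for $quiet seconds.
        while read -r -t "$quiet" _event; do :; done
        take_snapshot
    done
}

# Wire it to real events (requires inotify-tools):
#   inotifywait -mrq -e modify,create,delete,move /your/dir | debounce 3
```

The inner `read -t` loop is what makes a burst of events collapse into a single read "after the dust settles."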
Some use cases are better served by a system-wide process, I agree, but when I think source code, I think VSCodium. It is about configuration and starting/stopping. I don't mind the browser based web UI, but I do mind having to babysit one more (albeit super useful) tool. I'd rather have it as a VSCodium extension that would AUTOMATICALLY start when I load a workspace, configure the watched directory from that workspace, and stop when I close the workspace. So instead of me spending my attention on babysitting UNF, through VSCodium, UNF would just follow me wherever I go with zero configuration needed.
Well, if I have 10 different projects across 10 different drives, then, yes, I would need to babysit it. Furthermore, I wouldn't want to run it 24/7, but only when the files are actually going to be changed.
Great idea, terrible name. Honestly this sort of stuff reinforces the idea tech types lack social skills and maturity. NB I'm fine with vulgarity in its place (UK Viz reader here), but potentially professional tools isn't that place. Edit: I notice the blueness seems to have been deprecated in naming.
+1 for the open source comments.
In your examples the framing of use cases against agent screw-ups is contemporary and well-chosen.
Best of luck with the project as you make it more useable.
This is not something I would ever use. The idea of giving a probabilistic model the permission to run commands with full access to my filesystem, and at the very least not reviewing and approving everything it does, is bonkers to me.
But I'm amused by the people asking for the source code. You trust a tool from a giant corporation with not only your local data, but with all your data on external services as well, yet trusting a single developer with a fraction of this is a concern? (:
I don’t think that’s as crazy as you do. Corporations are supposed to have checks and balances in place, safeguards, policies. Individuals might have none of these.
Why did you make it so complicated? Magit has `magit-wip-mode`, which just silently creates refs in git intermittently so you can use the reflog to get things back.
From what I know (correct me) magit-wip-mode hooks into editor saves. UNF hooks into the filesystem.
magit-wip-mode is great if your only risk is your own edits in Emacs. UNF* exists because that's no longer the only risk; agents are rewriting codebases/docs and they don't use Emacs.
No, you don't "have to do so manually"; all agents can run `git commit` for you. If you end up with too many commits for your taste, squash on merge or before push: `git reset --soft HEAD~3; git commit -m "Squashed 3 commits"`.
OP here. Grateful you gave it a look, but I want to clarify that Time Machine can't be used for this use case.
UNF is one install command plus `unf watch` to protect a repo on every file change; it takes 30 seconds.
Time Machine snapshots hourly, not on every change, so you can lose real work between snapshots. This may have changed, or I may have missed something, but I reviewed the app to see whether it was possible.
And while tmutil exists, it wasn't designed to be invoked mid-workflow by an agent. UNF* captures every write and is built to be part of the recovery loop.
Edit: you did it more than once in this thread - the other case was https://news.ycombinator.com/item?id=47183957. Can you please stop posting like this? It's not what this site is for, and destroys what it is for.
Appreciate that perspective and assumed some folks would feel that way.
I am more interested in testing whether folks have the problem and like the shape of the solution before I try to decide on the model to sustain it. Open source, to me, is saying: "hey, do you all want to help me build this?"
I'm not even at the point of knowing if it should exist, so why start asking people to help without that validation?
I work(ed) with OSS projects that have terrible times sustaining themselves, and I don't default to it because of that trauma.
"Local history" is a very popular feature in the JetBrains IDEs (just search HN comments), and I remember similar tools appearing on HN several times in the past (for example https://news.ycombinator.com/item?id=29784238), so clearly there is demand for such functionality (or at least was in the past, when almost all code edits were manual).
I didn't open up the source for this as I have a mono-repo with several experiments (and websites).
Happy to open the source up and link it from the existing website.
I've started having an Agent migrate it out, and will review it before calling it done. Watch https://github.com/cyrusradfar/homebrew-unf
Edit: You can download the current version now: https://github.com/cyrusradfar/homebrew-unf/archive/refs/tag...
I'd have to imagine that moving this out to its own repo with Claude Code would be trivial so I don't understand the resistance.
This is a great idea. I look forward to seeing a proper repo for it.
It's just the Homebrew cask and recipe.
This does not contain the source.
https://www.jetbrains.com/idea/download/other/#releases-2012
jj wouldn’t help with that as it would be gitignored.
> The tool skips binaries and respects `.gitignore` if one exists.
Or `git reset --soft main` and then deal with the commits.
Or have two .git directories: just add `--git-dir=.git-backups` (or whatever you want to name it) to the git commands.
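A sketch of that second-git-dir trick. The `shadow_commit` name and the exclusion handling are my additions; the exclusion matters because git only auto-ignores a directory literally named `.git`, so the shadow dir would otherwise end up in its own index:

```shell
# Shadow repo: a second git dir sharing the same worktree, so backup
# commits never touch the history in your real .git.
shadow_commit() {   # usage: shadow_commit /path/to/worktree
    local wt=$1 gd=$1/.git-backups
    if [ ! -d "$gd" ]; then
        git --git-dir="$gd" init -q
        # Keep the shadow dir out of its own snapshots.
        echo ".git-backups/" >> "$gd/info/exclude"
    fi
    git --git-dir="$gd" --work-tree="$wt" add -A
    git --git-dir="$gd" --work-tree="$wt" \
        -c user.name=backup -c user.email=backup@localhost \
        commit -qm "auto-$(date +%s)" --allow-empty
}
```

Run it from a cron job or an inotifywait loop and browse the backups with `git --git-dir=.git-backups log`.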
https://gavinray97.github.io/blog/llm-build-cheaper-than-sea...
My comment is not meant as a shallow dismissal of the author's work, but rather of what seems to be a growing, systemic issue.
https://www.vim.org/scripts/script.php?script_id=89
https://cacm.acm.org/research/why-google-stores-billions-of-...
https://github.com/facebook/sapling/blob/main/eden/fs/docs/O...
Looking forward to trying it.
Keep it up!
More on versioning filesystems:
- NILFS https://en.wikipedia.org/wiki/NILFS
- topic https://en.wikipedia.org/wiki/Versioning_file_system
Feel free to follow up for clarity.
Why not just use Time Machine?
I could build an extension for the UI vs a Tauri app, and it could help you install the CLI if you don't have it. Would that meet your needs?
That said, the fidelity of an OS-level daemon can't really be replicated from within an app process.
One install, one init, and then it just works. It shouldn't stop across restarts or crashes.
I do this, but I certainly see the appeal of something better.
https://news.ycombinator.com/newsguidelines.html
Thanks for stopping by.