32 comments

  • notfried 2 days ago
    I love the website; the design, the video, the NSFW toggle, the simplicity.

    I love the idea; definitely something I ran into a few times before and wish I had.

    Unfortunately, I am not installing a closed-source daemon with access to the filesystem from an unknown (to me) developer. I will bookmark this and revisit in a few weeks, hoping you'll have published the source by then. :)

    • cyrusradfar 2 days ago
      Totally understandable.

      I didn't open up the source for this as I have a mono-repo with several experiments (and websites).

      Happy to open the source up and link it from the existing website.

      I've started to have an Agent migrate it out, and will review it before calling it done. Watch https://github.com/cyrusradfar/homebrew-unf

      Edit: You can download the current version now: https://github.com/cyrusradfar/homebrew-unf/archive/refs/tag...

      • OccamsMirror 1 day ago
        I have to agree with the previous user. I'm not brew installing a closed source daemon.

        I'd have to imagine that moving this out to its own repo with Claude Code would be trivial so I don't understand the resistance.

        This is a great idea. I look forward to seeing a proper repo for it.

        • ctmnt 1 day ago
          I agree. It’s a neat idea and I’d be interested in seeing the details. A downloadable tarball is a lot better than nothing, but it still makes more work to evaluate a random project than I’m inclined to perform. It makes me assume the commit history is ugly in some way (being charitable and assuming the code itself isn’t). Hearing that it’s developed within a monorepo of unrelated projects and experiments isn’t inspiring either. Anyway, perhaps someone else will download the source and report back.

          Edit: To be clear, I'm not saying any of those things are true, just that those are the first thoughts I have when someone says their source is open but makes it difficult to view, in an age when it's so trivial and commonplace to make source easily viewable.

          • rovr138 1 day ago
            I don't see the source in their tar archive.

            It's just the Homebrew cask and recipe.

      • rovr138 1 day ago
        > Edit: You can download the current version now: https://github.com/cyrusradfar/homebrew-unf/archive/refs/tag...

        This does not contain the source.

    • popalchemist 2 days ago
      Agreed on all counts. It looks great! Just can't trust it unless it's transparent.
  • wazzaps 2 days ago
    FYI all Jetbrains IDEs include this, as long as they are open on the codebase. It's called "Local history".
    • its-kostya 2 days ago
      I love to use the terminal, and I still do. But as much as I love to unfu*k my local nvim setup, I'd much rather pay a company to do it for me. Set up vim bindings inside JetBrains and everything comes with batteries included, along with a kick-ass debugger. While my colleagues are fighting opencode, I pointed my IDE at the correct MCP gateway and everything "just works" with more context.

      Thought I'd share the data point to support JetBrains.

      • nurettin 1 day ago
        On behalf of everyone who dislikes jetbrains business model, I would like to say: duly noted.
        • its-kostya 3 hours ago
          I pay $179 per year for ALL their IDEs as an individual -- $15 a month. Nothing compares to their C, Go, and Rust IDEs. Chaining in Rust makes for very verbose statements; folding in JetBrains takes care of it. Took me an hour to get similar, yet still lacking, code folding in nvim with ufo. Sure, they aren't perfect. Working in multi-language repos requires multiple resource-hungry IDEs (but I typically just use nvim unless I am doing something very involved).

          If I stop paying, I have the perpetual licence for the version at which I last paid so the "I WaNt tO owN mY SoFTWArE" crowd (which I am a part of) can choose to only pay when their current version starts lacking modern features. That's the reality of anything that you buy.

          My experience is the software development landscape evolves so frequently that a yearly refresh of modern convenience features makes it a no brainer for a professional. I love to tinker, but between family and career, if I do happen to have a few hours to code I don't want to spend any of that time debugging my custom IDE. I used to, I respect those that do, but that's just not how I want to spend my time.

          Plus, I am very happy to support developers and I encourage others to as well. Esp when the company isn't "branching out to sell user data, advertising."

        • 8n4vidtmkvmk 1 day ago
          What's wrong with their business model? Pay once, get a year of free updates and keep it forever. Runs locally. Want updates? Pay discounted renewal. Seems reasonable. Their AI subscription OTOH needs work. Not worth it yet with how flaky it is.
          • nurettin 1 day ago
            I paid once in 2012 and now I can't download it anymore. Even if I could, it probably wouldn't work.
            • its-kostya 3 hours ago
              FWIW, I don't think I use any software (unless it is single, narrow-purpose) that is almost 1.5 decades old. Fun fact: Docker was released in 2013, so your IDE likely doesn't even support build targets in containers.

              But if you have the receipt, email the company and ask for help obtaining an old version. They are very willing to help customers, both current and previous.

            • hdjrudni 1 day ago
              Interesting. On https://account.jetbrains.com/licenses/assets I have a "Download" button and "Download a code for offline activation" and "Generate legacy license key" buttons. I figured I could use one of those if I ever decide to cancel my sub, but I admit I have not tested the theory. It's possible your copy is indeed too old.
    • gschrader 2 days ago
      I think it only keeps history for user-edited files; agent-edited files don't seem to end up in it for me (Claude Code), but maybe it works with other agents with the proper plugins -- I'm not sure.
      • cyrusradfar 2 days ago
        +1 OP here, this is the problem I'm solving for. Agents use tools and may be in multiple places editing; therefore, you need to watch the file system.
    • heeen2 2 days ago
      vscode and its forks as well (for files it saves)
  • mpalmer 2 days ago
    This is so cool to have made yourself. How would you compare this to the functionality offered by jujutsu? I love the histogram, it was the first sort of thing I wanted out of jujutsu that its UI doesn't make very easy. But with jj the filesystem tracking is built in, which is a huge advantage.
    • cyrusradfar 2 days ago
      I'm not a user, but I looked at the site and it looks like jj snapshots when you run a jj command. UNF snapshots continuously.

      If an AI agent rewrites 30 files and you haven't touched jj yet, jj has the before-state but none of the intermediate states. UNF* captures every save as it happens, at the filesystem level.

      jj is a VCS. UNF is a safety net that sits below your VCS.

        - UNF* works alongside git, jj, or no VCS at all
        
        - No workflow change. You don't adopt a new tool, it just runs in the background
        
        - Works on files outside any repo (configs, scratch dirs, notes) as it doesn't require git.
      
      They're complementary, not competing.

      W.r.t. the histogram, this is my fav feature of the app as well. Session segmentation (still definitely not perfect) creates selectable regions to make it easier, too. The algo is in the CLI as well, for the Agent recap (context-rebuilding) features.

      • lexluthor38 2 days ago
        To be fair, jujutsu has a watchman feature which uses inotify to create snapshots on file change as well. Your tool probably has a more tailored UX for handling these inter-commit changes though, so it could still provide complementary value there.
        • mpalmer 2 days ago
          Yes, I was thinking of the watchman integration. And I also really love the DSLs it gives you for selecting change sets and assembling log formats.
    • benoitg 1 day ago
      One of the use cases on their website is "the agent deleted my .env file".

      jj wouldn’t help with that as it would be gitignored.

      • JimDabell 1 day ago
        This tool doesn’t help with that either:

        > The tool skips binaries and respects `.gitignore` if one exists.

  • ghrl 1 day ago
    This tool seems very promising and does solve a real issue -- I've certainly hit quite a few situations already where this would have been very useful.

    I really like the website, design- and content-wise, as well as the detailed writeup here on HN. Certainly impressive work for an individual.

    I've never used Time Machine, but I do have kopia set up for frequent backups of important places, like the entire Documents folder. It uses a CAS as well, and last-modified time/size/hashes to determine what to save.

    Coming from that I'm curious about a few aspects. Would this work well for larger folders/files? You mentioned deduplication, but does that happen on the file level or on chunks of the file, so that small changes don't store an entire new version of the file? Additionally, are the stored files compressed somehow, probably a fast compression algorithm? I think that could make it work for more than just source code.

    Great project though, so far. I could see it becoming a lot more popular given an open source code base. Maybe a pricing model like the one Mac Mouse Fix uses would work, being open source and charging a fee so small it still reaches a large audience. That would likely be fair considering the developer time/agent cost of just a single unf'd problem as ROI.

  • gavinray 1 day ago
    In today's version of "LLMs allow a person to write thousands of lines of code to replace built-in Unix tools":

  inotifywait -mrq -e modify,create,delete /your/dir |
  while read -r _; do
    cd /your/dir && git add -A && git commit -m "auto-$(date +%s)" --allow-empty
  done
    
    There are 8+ billion people on the planet, computing has been around a while now, and some REALLY smart people have published tools for computers. Ask yourself, "am I the first person to try to solve this problem?"

    Odds are, one or more people have had this problem in the past and there's probably a nifty solution that does what you want.

    • cyrusradfar 1 day ago
      OP here. It's hard to read and respond to this in good faith.

      I think it would be dishonest if I didn't share that your approach to discourse here isn't a productive way of asking what insights I'm bringing.

      If that's your concern, I agree I can't claim that nothing exists to solve pieces of the puzzle in different ways. I did my research and was happy that I could get a domain that captured the struggle -- namely unfucked.ai/unfudged.io. Moreover, I do feel there are many pieces and nuances to the experience that give pause to folks who create versioning tools.

      I'm open to engaging if you have a question or comment that doesn't diminish my motives, assume I must operate in your world view that "problems can only be solved once", or discourage people from trying new things and learning.

      Look, I'm grateful that you stopped by and hope you'll recognize I'm doing my best to manage my own sadness that my children have to exist in a world where folks think this is how we should address strangers.

      • gavinray 1 day ago

          > assumes I must operate in your world view "problems can only be solved once"
        
        I never claimed anyone else has to agree with this. That's why people are allowed different opinions.

        Nobody ought to give a damn what I think, the only opinion that matters about you is your own.

        But just like I won't ask you to adopt my view, I also won't go around patting people on the back for TODO apps.

        My opinion: people ought to spend more time contributing to solving genuine problems. The world needs more of that, and less "I built a TODO app" or "Here's my bespoke curl wrapper".

    • amadeuspagel 1 day ago
      Only works in a git directory, and one might want to use git only for manual version control and another tool for automatic.
      • gavinray 1 day ago
        Then replace git with rsync or borg. But I don't see how running "git init" in a directory you have "days of work" accumulated in is a sticking point.

        Git is a convenient implementation detail.

        The core loop of "watch a directory for changes, create a delta-only/patch-based snapshot" has been a solved few-liner in bash for a long time...

        • virgildotcodes 1 day ago
          Something something Dropbox

          There are a huge number of people coming into agentic coding with no real background in software dev, no real understanding of git, and even devs with years of experience will readily reach for convenience and polish even when they could otherwise implement it themselves, see: Vercel's popularity.

      • rovr138 1 day ago
        Create a branch, squash the branch manually when you want and merge things.

        or `git reset --soft main` and then deal with the commits

        or have 2 .git directories. Just add `--git-dir=.git-backups` (or whatever you want to name it) to your git commands.

    • gavinray 1 day ago
      This is happening so frequently I just wrote a blog about it to vent my frustration:

      https://gavinray97.github.io/blog/llm-build-cheaper-than-sea...

      My comment is not meant as a shallow dismissal of the author's work, but rather of what seems to be a growing, systemic issue.

  • oftenwrong 1 day ago
    I have used savevers.vim for many years as a way to recover old versions of files.

    https://www.vim.org/scripts/script.php?script_id=89

    It is comparatively unsophisticated, but I need it so infrequently that it has been good enough.

    I do like the idea of maintaining a complete snapshot of all history.

    This is a good application for virtual filesystems. The virtual fs would capture every write in order to maintain a complete edit history. As I understand it, Google's CitC system and Meta's EdenFS work this way.

    https://cacm.acm.org/research/why-google-stores-billions-of-...

    https://github.com/facebook/sapling/blob/main/eden/fs/docs/O...

  • mplanck 2 days ago
    Yep, I’ve needed something like this a few times. Even when trying to be careful to commit every step to a feature branch, I’ve still found myself asking for code fixes or updates in a single iteration and kicking myself when I didn’t just commit the damn thing. This will be a nice safety net.
    • cyrusradfar 2 days ago
      Thank you! That's great to hear.

      I spent a bit of time being baffled that nothing existed that does this. Then I realized that, until agents, the velocity of changes wasn't as high and errors were rare(r).

      • datawars 2 days ago
        Thank you for pointing out a problem that I have (which I do!) and had been solving with Time Machine and by trying to make myself commit more frequently -- and for providing a solution! Looks very cool, too. If I close the terminal I started --watch in, will the watch continue?

        Writing this, I wanted to ask if the desktop app includes the CLI, but your website already says it does :-) Thanks for thinking that far ahead, and for picking us up here and now so we can easily follow along into an unf* future!

        Looking forward to trying it.

        • cyrusradfar 2 days ago
          Yes -- I put a lot of work into that. Once you start a watch, it keeps watching until you stop it, including through closed terminals, computer power-offs, etc. It should restart on reboot, but -- test it yourself and tell me if I'm wrong :)

            > unf watch
          
            # reboot
            > unf list
          
          It should say it's still watching your directory. If it says crashed, or something else happens, ping me at support at v1.co.

          Just one human here -- two machines at my home can't replicate all configurations...

  • rusty-jules 2 days ago
    This is a real problem! Sounds like you and dura landed on similar solutions: https://github.com/tkellogg/dura

    Keep it up!

  • ncr100 2 days ago
    A useful idea!

    Alternative - version files and catalog those versions (most of the work, with "Unfucked", appears to be catalog management), building it on top of a Versioning File System.

    E.g. NILFS, a log-structured file system that logs every block change in real time

    more:

    - NILFS https://en.wikipedia.org/wiki/NILFS

    - topic https://en.wikipedia.org/wiki/Versioning_file_system

  • rishabhaiover 2 days ago
    haha the NSFW toggle is crazy
    • cyrusradfar 2 days ago
      Ha, the only feedback I needed :) I spent far too much time making the unicorn explode properly...
  • ifh-hn 2 days ago
    I have used Fossil in a similar way: also local, and SQLite-based. Admittedly you have to add files to it first, but setting it running via cron was simple enough. Though it wasn't because I let an AI access all my stuff.
  • dpe82 1 day ago
    ZFS snapshots can be used to similar effect, basically for free.
    • jona-f 1 day ago
      Or btrfs for that matter. I'm doing something similar with btrfs. Used ZFS for a while, but the external repositories kept getting out of sync with the distribution kernel, so system updates required manual intervention. That annoyed the heck out of me over time. Switched back to btrfs, which has been working fine for the last year. 10 or so years earlier I still had data corruption and bugs with btrfs.
  • dataflow 1 day ago
    Don't change-notification-based mechanisms suffer from potentially reading a half-written file? Or do you do something more clever?
    • cyrusradfar 1 day ago
      There's a 3-second debounce. Don't hold me to that timeframe, that's the default now.

      It doesn't read the file the instant the OS fires the event. It accumulates events and waits for 3 seconds of silence before reading. So if an editor does write-tmp → rename (atomic save), or a tool writes in chunks, we only read after the dust settles.

      I accept there are cases where, if the editor crashes mid-write, you end up with a corrupted state -- but there was never a good state to save, so arguably you'd just restore what's on file and discard the corrupt partial write.

      It's not bulletproof against a program that holds a file open and writes to it continuously for more than 3 seconds, but in practice that doesn't happen with text files by Agent tools or IDEs.
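      A minimal sketch of that debounce pattern in shell -- this is my reconstruction of the idea, not UNF's actual code; the event source is stand-in (`inotifywait`-style lines on stdin) and `SNAPSHOT` is a placeholder for the real capture step:

```shell
# debounce QUIET: after a first event arrives on stdin, keep draining
# follow-up events until QUIET seconds pass with no new ones, then act once.
debounce() {
  local quiet="${1:-3}" line
  while read -r line; do                 # block until a first event arrives
    while read -r -t "$quiet" line; do   # drain follow-up events...
      :                                  # ...each one resets the quiet timer
    done
    echo "SNAPSHOT"                      # placeholder for taking a snapshot
  done
}

# usage (illustrative): inotifywait -mrq -e modify,create,delete /your/dir | debounce 3
```

      A burst of events (e.g. an editor's write-tmp → rename, or a chunked write) collapses into a single snapshot taken after the dust settles.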

      Feel free to follow up for clarity.

      • dataflow 23 hours ago
        Thanks, that makes sense.
  • s0a 3 days ago
    this seems insanely useful and well thought out. kinda surprised something like it doesn’t already exist. def useful in the age of agents
  • teo_zero 1 day ago
    Excellent idea. Looking forward to trying it. Any way to install it without brew?
  • sourcegrift 1 day ago
    I use ZFS 15-minute snapshots for this. NixOS+ZFS makes it too easy
  • pasc1878 1 day ago
    If this is on Homebrew then it is on macOS.

    Why not just use Time Machine?

    • otterley 1 day ago
      The article explains that.
  • atmanactive 2 days ago
    This would be great as a VSCode(ium) extension.
    • cyrusradfar 1 day ago
      OP here --

      I could build an extension for the UI vs a Tauri app, and it could help you install the CLI if you don't have it. Would that meet your needs?

      That said, the fidelity of an OS-level daemon can't really be replicated from within an app process.

      • atmanactive 1 day ago
        Some use cases are better served by a system-wide process, I agree, but when I think source code, I think VSCodium. My concern is configuration and starting/stopping. I don't mind the browser-based web UI, but I do mind having to babysit one more (albeit super useful) tool. I'd rather have it as a VSCodium extension that would AUTOMATICALLY start when I load a workspace, configure the watched directory from that workspace, and stop when I close the workspace. So instead of me spending my attention on babysitting UNF, through VSCodium, UNF would just follow me wherever I go with zero configuration needed.
        • cyrusradfar 1 day ago
          You really shouldn't need to babysit UNF. It feels like git.

          One install, one init, and then it just works. It keeps running across restarts and crashes.

          • atmanactive 1 day ago
            Well, if I have 10 different projects across 10 different drives, then, yes, I would need to babysit it. Furthermore, I wouldn't want to run it 24/7, but only when the files are actually going to be changed.
  • overcrowd8537 2 days ago
    love the idea of this, but echoing others... closed source daemon with access to all files is a 100% non-starter.
  • mellosouls 1 day ago
    Great idea, terrible name. Honestly, this sort of stuff reinforces the idea that tech types lack social skills and maturity. NB I'm fine with vulgarity in its place (UK Viz reader here), but potentially professional tools aren't that place. Edit: I notice the blueness seems to have been deprecated in the naming.

    +1 for the open source comments.

    In your examples the framing of use cases against agent screw-ups is contemporary and well-chosen.

    Best of luck with the project as you make it more usable.

  • monster_truck 2 days ago
    Where is the source? I'm not going to rely on or trust anything this important to code I can't read.
  • alunchbox 2 days ago
    Just use Jujutsu
  • imiric 1 day ago
    This is not something I would ever use. The idea of giving a probabilistic model the permission to run commands with full access to my filesystem, and at the very least not reviewing and approving everything it does, is bonkers to me.

    But I'm amused by the people asking for the source code. You trust a tool from a giant corporation with not only your local data, but with all your data on external services as well, yet trusting a single developer with a fraction of this is a concern? (:

    • urbandw311er 1 day ago
      I don’t think that’s as crazy as you do. Corporations are supposed to have checks and balances in place, safeguards, policies. Individuals might have none of these.
  • williamstein 2 days ago
    Is this open source or source available?
  • bananapub 2 days ago
    why did you make it so complicated? magit has a `magit-wip-mode` that just silently creates refs in git intermittently so you can just use the reflog to get things back.
    • cyrusradfar 2 days ago
      This was designed for any file save.

      From what I know (correct me) magit-wip-mode hooks into editor saves. UNF hooks into the filesystem.

      magit-wip-mode is great if your only risk is your own edits in Emacs. UNF* exists because that's no longer the only risk; agents are rewriting codebases/docs and they don't use Emacs.

  • mrorigo 1 day ago
    Why not just fuckin commit!?
    • 8n4vidtmkvmk 1 day ago
      You end up with a lot of small dumb commits, and you have to do so manually between nearly every LLM interaction.

      I do this, but I certainly see the appeal of something better.

      • mrorigo 1 day ago
        no, you don't 'have to do so manually' -- all agents can run 'git commit' for you. If you end up with too many commits for your taste, squash on merge or before push: `git reset --soft HEAD~3; git commit -m "Squashed 3 commits"`
  • riteshyadav02 1 day ago
    [dead]
  • zack2722 3 days ago
    [dead]
  • schainks 2 days ago
    So this is Time Machine, but with extra steps? </s>
    • cyrusradfar 1 day ago
      OP here -- grateful you gave it a look, but want to clarify that TM can't be used for this use case.

      UNF is one install command + unf watch to protect a repo on every file change, takes 30s.

      Time Machine snapshots hourly, not on every change, so you can lose real work between snapshots. This may have changed, or I may have missed something, but I reviewed the app to see if it was possible.

      And while tmutil exists, it wasn't designed to be invoked mid-workflow by an agent. UNF* captures every write and is built to be part of the recovery loop.

  • OutOfHere 2 days ago
    [flagged]
  • OutOfHere 2 days ago
    [flagged]
    • cyrusradfar 2 days ago
      Appreciate that perspective and assumed some folks would feel that way.

      I am more interested in testing if folks have the problem and like the shape of the solution, before I try to decide on the model to sustain it. Open Source to me is saying -- "hey do you all want to help me build this?"

      I'm not even at the point of knowing if it should exist, so why start asking people to help without that validation?

      I work(ed) with OSS projects that have terrible times sustaining themselves, and I don't default to it because of that trauma.

      Thanks for stopping by.