14 comments

  • simonw 56 minutes ago
    I recommend caution with this bit:

      --bind "$HOME/.claude" "$HOME/.claude"
    
    That directory has a bunch of sensitive stuff in it, most notably the transcripts of all of your previous Claude Code sessions.

    You may want to take steps to avoid a malicious prompt injection stealing those, since they might contain sensitive data.
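
    One way to reduce the blast radius is to keep the bind but mask the transcript directory with a tmpfs; bwrap applies mount arguments in order, so a later mount shadows an earlier one. A rough sketch, assuming the transcripts live under ~/.claude/projects (check your own layout):

      --bind "$HOME/.claude" "$HOME/.claude" \
      --tmpfs "$HOME/.claude/projects"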

  • theden 1 hour ago
    Kinda funny that a lot of devs have accepted that LLMs are basically doing RCE on their machines, but instead of stopping using `--dangerously-skip-permissions` or similar bad ideas, we're finding workarounds to convince ourselves it's not that bad.
    • simonw 1 hour ago
      Because we've judged it to be worth it!

      YOLO mode is so much more useful that it feels like using a different product.

      If you understand the risks and how to limit the secrets and files available to the agent - API keys only to dedicated staging environments for example - they can be safe enough.

      • zahlman 1 hour ago
        Why not just demand agents that don't expose the dangerous tools in the first place? Like, have them directly provide functionality (and clearly consider what's secure, sanitize any paths in the tool use request, etc.) instead of punting to Bash?
        • TeMPOraL 48 minutes ago
          Because it's impossible for fundamental reasons, period. You can't "sanitize" the inputs and outputs of a fully general-purpose tool, which an LLM is, any more than you can "sanitize" the inputs and outputs of people - not in the perfect sense you seem to be expecting here. There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.

          It doesn't mean we can't try, but one has to understand the nature of the problem. Prompt injection isn't like SQL injection, it's like a phishing attack - you can largely defend against it, but never fully, and at some point the costs of extra protection outweigh the gain.

          • zahlman 10 minutes ago
            > There is no grammar you can restrict LLMs to; for a system like this, the semantics are total and open-ended. It's what makes them work.

            You're missing the point.

            An agent system consists of an LLM plus separate "agentive" software that can a) receive your input and forward it to the LLM; b) receive text output by the LLM in response to your prompt; c) ... do other stuff, all in a loop. The actual model can only ever output text.

            No matter what text the LLM outputs, it is the agent program that actually runs commands. The program is responsible for taking the output and interpreting it as a request to "use a tool" (typically, as I understand it, by noticing that the LLM's output is JSON following a schema, and extracting command arguments etc. from it).

            Prompt injection is a technique for getting the LLM to output text that is dangerous when interpreted by the agent system, for example, "tool use requests" that propose to run a malicious Bash command.

            You can clearly see where the threat occurs if you implement your own agent, or just study the theory of that implementation, as described in previous HN submissions like https://news.ycombinator.com/item?id=46545620 and https://news.ycombinator.com/item?id=45840088 .

        • simonw 1 hour ago
          Because if you give an agent Bash it can do anything that can be achieved by running commands in Bash, which is almost anything.
          • zahlman 7 minutes ago
            Yes. My proposal is to not give the agent Bash, because it is not required for the sorts of things you want it to be able to do. You can whitelist specific actions, like git commits and file writes within a specific directory. If the LLM proposes to read a URL, that doesn't require arbitrary code; it requires a system that can validate the URL, construct a `curl` etc. command itself, and pipe data to the LLM.
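
            For the URL case, a sketch of what the "construct it yourself" side looks like (the script name and allow-list entries are made up for illustration):

              #!/bin/sh
              # fetch_url.sh - the agent runs this; the model only ever supplies the URL
              case "$1" in
                https://docs.example.com/*|https://api.example.com/*)
                  exec curl --fail --silent --max-time 10 -- "$1" ;;
                *)
                  echo "URL not on the allow list" >&2; exit 1 ;;
              esac
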
        • lilEndiansGame 22 minutes ago
          Because the OS already provides data security and redundancy features. Why reimplement?

          Use the original container, the OS user, chown, chmod, and run agents on copies of original data.
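
          A minimal sketch of that, with a made-up "agent" user and repo path:

            # one-time: an unprivileged user for the agent
            sudo useradd --create-home agent

            # hand it a copy of the repo, not the original
            sudo cp -r ~/projects/myapp /home/agent/myapp
            sudo chown -R agent:agent /home/agent/myapp
            chmod 0700 ~/projects/myapp   # original stays unreadable to that user

            # log in as that user and launch the agent from the copy
            sudo -u agent -i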

        • VTimofeenko 48 minutes ago
          Tools may become dangerous due to a combination of flags. `ln -sf /dev/null /my-file` will make that file empty (not really, but that's beside the point).
          • zahlman 3 minutes ago
            Yes. My proposal is that the part of the system that actually executes the command, instead of trying to parse the LLM's proposed command and validate/quote/escape/etc. it, should expose an API that only includes safe actions. The LLM says "I want to create a symbolic link from foo to bar" and the agent ensures that both ends of that are on the accept list and then writes the command itself. The LLM says "I want to run this cryptic Bash command" and the agent says "sorry, I have no idea what you mean, what's Bash?".
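
            Sketched in shell, purely to illustrate the shape (dispatch.sh, path_allowed and $REPO are invented names):

              # dispatch.sh - the only thing the agent process is allowed to execute.
              # $1 names an action from a fixed menu; nothing is ever handed to a shell.
              action="$1"; shift
              case "$action" in
                link)
                  # path_allowed: hypothetical accept-list check on both paths
                  path_allowed "$1" && path_allowed "$2" && ln -s -- "$1" "$2" ;;
                commit)
                  git -C "$REPO" commit -m "$1" ;;
                *)
                  echo "sorry, I have no idea what you mean, what's Bash?" >&2
                  exit 1 ;;
              esac
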
        • cindyllm 1 hour ago
          [dead]
      • pjm331 52 minutes ago
        I feel like you can get 80% of the benefits and none of the risks with just accept edits mode and some whitelisted bash commands for running tests, etc.
      • catlifeonmars 59 minutes ago
        Shouldn’t companies like Anthropic be on the hook for creating tools that default to running YOLO mode securely? Why is it up to 3rd parties to add safety to their products?
      • croes 41 minutes ago
        > Because we've judged it to be worth it!

        Famous last words

    • catlifeonmars 1 hour ago
      People really, really want to juggle chainsaws, so we have to keep coming up with thicker and thicker gloves.
  • meander_water 1 hour ago
    I recently created a throwaway API key for cloudflare and asked a cursor cloud agent to deploy some infra using it, but it responded with this:

    > I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.

    So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.

  • coppsilgold 22 minutes ago
    Note that bubblewrap can't protect you from misconfiguration, a kernel exploit, or exposing sensitive protocols to the workload inside (e.g. X11, or even Wayland without a security context). Generally, it will do a passable job of protecting you from an automated, no-0day attack script.
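
    If you're hand-rolling the invocation, a reasonable starting point is to unshare everything you don't explicitly need and avoid binding the display sockets at all. Roughly (your-agent-command is a placeholder, and the binds you need vary by distro and tool):

      bwrap --unshare-all --share-net --die-with-parent --new-session \
            --ro-bind /usr /usr --ro-bind /etc /etc \
            --proc /proc --dev /dev --tmpfs /tmp \
            --bind "$PWD" "$PWD" --chdir "$PWD" \
            your-agent-command

    --new-session detaches the sandbox from your controlling terminal (the TIOCSTI input-injection trick), and a tmpfs over /tmp keeps /tmp/.X11-unix out of reach.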
  • typs 1 hour ago
    I wish I had the opposite of this. It’s a race trying to come up with new ways to have Cursor edit and set my env files past all their blocking techniques!
    • GrowingSideways 1 hour ago
      If you wouldn't upload keys to github, why would you trust them to cursor?
      • hahahahhaah 1 hour ago
        A local .env should be safe to put on your T-shirt and walk down Times Square.

          Mysql user: test
          Password: mypass123
          Host: localhost
          ...

        • Imustaskforhelp 1 hour ago
          Create a symlink to .env from another file and ask cursor to refer to it instead, if the name is the concern (I don't know how cursor does this stuff).
  • dangoodmanUT 1 hour ago
    I've been saying bubblewrap is an amazing solution for years (and sandbox-exec as a Mac alternative). This is the only way I run agents on systems I care about.
    • catlifeonmars 1 hour ago
      > run agents on systems I care about

      You must not care about those systems that much.

  • majorchord 48 minutes ago
    If you don't mind a suid program, "firejail --private" is a lot less to type and seems to work extremely similarly. By default it will delete anything created in the newly-empty home folder on exit, unless you use --private=somedir to save it there instead.
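
    For example (the directory for --private= has to exist already; your-agent-command stands in for whatever you run):

      firejail --private your-agent-command                     # throwaway empty $HOME, gone on exit
      mkdir -p "$HOME/agent-home"
      firejail --private="$HOME/agent-home" your-agent-command  # keep results in ~/agent-home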
  • catlifeonmars 1 hour ago
    May I suggest rm -f .env? Or chmod 0600 .env? You’re not running CC as your own user, right? …Right?

    Oh, never mind:

    > You want to run a binary that will execute under your account’s permissions

  • Nora23 1 hour ago
    Smart approach to AI agent security. The balance between convenience and protection is tricky.
  • gexla 1 hour ago
    I believe this is also what Claude Code uses for the sandbox option.
  • isodev 1 hour ago
    My way of preventing agents from accessing my .env files is not to use agents anywhere near files with secrets. Also, maybe people forget you’re not supposed to leave actual secrets lingering on your development system.
  • OutOfHere 1 hour ago
    The link you need is https://github.com/containers/bubblewrap

    Don't leave prod secrets in your dev env.

  • hahahahhaah 1 hour ago
    Had this same idea in my head. Glad someone did it. For me the motivation is not LLMs but to have something as convenient as Docker without waiting for image builds: a fast Docker for running a bunch of services locally, where perfect isolation and imaging don't matter.
    • JCattheATM 1 hour ago
      So, Flatpak?

      Funny enough Bubblewrap is also what Flatpak uses.

      • Imustaskforhelp 1 hour ago
        I want to like Flatpak but I am genuinely unable to understand the state of CLI tools in Flatpak, or even how to develop for it. It all seems very weird to build upon compared to Docker.