8 comments

  • simonw 32 minutes ago
    Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run on a little VPS somewhere!
    • quantumleaper 19 minutes ago
      Should be quick and easy with WebGPU, too.
      • ilaksh 8 minutes ago
        Good idea. Could you make that?
    • HenryNdubuaku 26 minutes ago
      Thanks, yeah. The problem is just handling scale: we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.
  • simonw 1 hour ago
    Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... - I get this error when trying to run the steps in your README:

    > Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.

  • murkt 16 minutes ago
    Could this be a Siri-like core? "Set me a timer", "what's the weather", etc. Hand it the transcribed text and the available list of tools for the model to call, then voice the output.
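
    A minimal sketch of that loop, in Python, assuming hypothetical transcribe(), choose_tool(), and speak() helpers (none of these names come from the post; the small local model would sit behind choose_tool()):

        # Hypothetical Siri-like pipeline: transcript in, tool call out, speech back.
        TOOLS = [
            {"name": "set_timer", "args": ["duration"]},
            {"name": "get_weather", "args": ["location"]},
        ]

        def transcribe(audio: bytes) -> str:
            raise NotImplementedError  # on-device speech-to-text goes here

        def choose_tool(text: str, tools: list[dict]) -> dict:
            # The local model takes the transcript plus the tool list and returns
            # something like {"name": "set_timer", "args": {"duration": "5m"}}.
            raise NotImplementedError

        def speak(text: str) -> None:
            raise NotImplementedError  # text-to-speech goes here

        def run_tool(call: dict) -> str:
            if call["name"] == "set_timer":
                return f"Timer set for {call['args']['duration']}."
            if call["name"] == "get_weather":
                return f"Here's the weather for {call['args']['location']}."
            return "Sorry, I can't do that yet."

        def handle(audio: bytes) -> None:
            text = transcribe(audio)          # "set me a timer for five minutes"
            call = choose_tool(text, TOOLS)   # the model picks a tool and fills its args
            speak(run_tool(call))             # voice the output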
  • ilaksh 50 minutes ago
    Hmm.. this might make it feasible to build something like a command-line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing", and it could get pretty bad if everyone started doing that.

    But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

    E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
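
    A rough sketch of the dispatch, in Python, with a hypothetical run_needle() standing in for whatever call actually loads the bundled ~14 MB INT4 model (neither the helper name nor the prompt format comes from Needle's real API, and "toolcli-core" is just a placeholder binary):

        import subprocess
        import sys

        HELP = """Known invocations:
          toolcli --help summary
          toolcli --gadd <group> <user>
        """

        def run_needle(prompt: str) -> str:
            # Stand-in for the local model call: a real build would feed `prompt`
            # to the bundled INT4 model and get back one line of flags,
            # e.g. "--gadd teamfutz tom".
            raise NotImplementedError("wire the bundled model in here")

        def main() -> None:
            args = sys.argv[1:]
            if args and args[0].startswith("-"):
                # Looks like ordinary flags; skip the model entirely.
                subprocess.run(["toolcli-core", *args], check=False)
                return
            # Otherwise treat the whole command line as natural language.
            prompt = f"{HELP}\nRequest: {' '.join(args)}\nFlags:"
            flags = run_needle(prompt).split()
            subprocess.run(["toolcli-core", *flags], check=False)

        if __name__ == "__main__":
            main()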

    • HenryNdubuaku 47 minutes ago
      So Needle is trained for INT4; what you see in the playground is the INT4 version, only 14 MB. Same challenge, though.
      • ilaksh 42 minutes ago
        Oh gotcha. Fixed my comment.
  • cmrdporcupine 34 minutes ago
    This is very cool. I'm going to try to carve out some time to build this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command-parser front end.
  • ac29 17 minutes ago
    FYI, distilling Gemini is explicitly against the ToS:

    "You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

    • ilaksh 10 minutes ago
      I think GLM 5.1 or Kimi 2.6 could be used instead for this kind of purpose.
    • vablings 15 minutes ago
      Oh no! They stole the model weights! Distillation "attacks" are such bullshit.