13 comments

  • strenholme 1 hour ago
    Shameless plug time:

    My own MaraDNS has been extensively audited now that we’re in the age of AI-assisted security audits.

    Not one single serious security bug has been found since 2023. [1]

    The only bugs auditors have been finding are things like “Deadwood, when fully recursive, will take longer than usual to release resources when getting this unusual packet” [2] or “This side utility included with MaraDNS, which hasn’t compiled since 2022, has a buffer overflow, but only if one’s $HOME is over 50 characters in length” [3].

    I’m actually really pleased with just how secure MaraDNS is now that it’s getting real, in-depth security audits.

    [1] https://samboy.github.io/MaraDNS/webpage/security.html

    [2] https://github.com/samboy/MaraDNS/discussions/136

    [3] https://github.com/samboy/MaraDNS/pull/137

    • binaryturtle 30 minutes ago
      That's a bit shameless, indeed.

      dnsmasq has served me well for what feels like an eternity, in multiple setups for different use cases. Like all software it has bugs, and once located, those get fixed. Its author is also easy to communicate with.

      Why should I switch over to something way less proven? I'm quite sure your software also has bugs, many still not located. Maybe because it's less popular / less well known, nobody cares to hunt for them? Which means that even if the number of found bugs in your software is lower at the moment, and it may look better-audited for that reason, it may actually be way less secure.

      • daneel_w 15 minutes ago
        > Why should I switch over to something way less proven?

        Must they prove their software to you? They're offering an alternative, not bargaining for a deal.

      • rgkpz 20 minutes ago
        "All software has bugs" is the most meaningless statement ever. It is just used for bonding with fellow bug writers who sit at a virtual campfire and muse about inevitabilities.

        Demonstrably some software has fewer bugs, and its authors are often hated, especially if they are a lone author like Bernstein. Because it must not happen!

        Projects with useless churn and many bug reports are more popular because only activity matters, not quality.

  • washingupliquid 2 hours ago
    Maybe this is the kick in the ass Debian needs to upgrade the embarrassingly ancient dnsmasq in "stable", because while I can't think of any new features, the latest versions contain many non-CVE bug fixes.

    But I doubt it; they will lazily backport these patches to create some Frankenstein one-off version and be done with it.

    Before anyone says "tHaT's wHaT sTaBlE iS fOr": they have literally shipped straight-up broken packages before, because fixing it would somehow make it not "stable". They would rather ship useless, broken code than something too new. It's crazy.

    • zrm 1 hour ago
      They're not going to put a newer version in stable. The way stable gets newer versions of things is that you get the newer version into testing and then every two years testing becomes stable and stable becomes oldstable, at which point the newer version from testing becomes the version in stable.

      The thing to complain about is if the version in testing is ancient.
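
      Concretely, the rollover is just the suite alias moving. A sketch with standard sources.list semantics, using "trixie" only as the current example:

        deb http://deb.debian.org/debian trixie main   # pinned: stays this release forever
        deb http://deb.debian.org/debian stable main   # rolls over at each release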

      • koverstreet 1 hour ago
        No, that's exactly the thing to complain about.

        That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time. Not to mention, you think the C of today is bad? Have you looked at old C?

        And the disadvantage is that backporting is manual, resource-intensive, and prone to error - and the projects that are the most heavily invested in that model are also the projects investing the least in writing tests and automated test infrastructure - because engineering time is a finite resource.

        On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically, and encourages a whack-a-mole approach - because in the backport model, people want fixes they can backport. And then things just get worse and worse.

        We'd all be a lot better off if certain projects took some of the enthusiasm with which they throw outrageous engineering time at backports, and spent at least some of that on automated testing and converting to Rust.

        • zrm 1 hour ago
          > That whole model dates to before automated testing was even really a thing, and no one knew how to do QA; your QA was all the people willing to run your code and report bugs, and that took time.

          That's not what it's about.

          What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled. You don't want that as an automatic update because it will break in production for anyone who is actually using it. So instead the change goes into the testing release and the user discovers that in their test environment before rolling out the new release into production.
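
          To make that concrete, the difference can be a single client-config default. GSSAPIAuthentication is a real ssh_config option, but which default actually ships is release- and distro-dependent, so treat this as a sketch:

            # /etc/ssh/ssh_config (sketch; shipped defaults vary by release/distro)
            Host *
                GSSAPIAuthentication yes   # flip this to "no" and Kerberos users break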

          > On top of that, the backport model heavily discourages the kinds of refactorings and architectural cleanups that would address bugs systemically and encourage a whack-a-mole approach - because in the backport model, people want fixes they can backport.

          They're not alternatives to each other. The stable release gets the backported patch, the next release gets the refactor.

          But that's also why you want the stable release. The refactor is a larger change, so if it breaks something you want to find it in test rather than production.

          • koverstreet 1 hour ago
            You're going to have to update production at some point, and delaying it to once every 2 years is just deferred maintenance. And you know what they say about that...

            So when you do update and get that GSSAPI change, it comes with two years worth of other updates - and tracking that down mixed in with everything else is going to be all kinds of fun.

            And if you're two years out of the loop and it turns out upstream broke something fundamental, and you're just now finding out about it while they've moved on and maybe continued with a redesign, that's also going to be a fun conversation.

            So if the backport model is expensive and error prone, and it exists to support something that maybe wasn't such a good idea in the first place... well, you may want something, but that doesn't make it smart.

            • zrm 40 minutes ago
              There are two different kinds of updates.

              One is security updates and bug fixes. These need to fix the problem with the smallest change to minimize the amount of possible breakage, because the code is already vulnerable/broken in production and needs to be updated right now. These are the updates stable gets.

              The other is changes and additions. They're both more likely to break things and less important to move into production the same day they become public.

              You don't have to wait until testing is released as stable to run it in your test environment. You can find out about the changes the next release will have immediately, in the test environment, and thereby have plenty of time to address any issues before those changes move into production.
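
              A minimal sketch of that setup, assuming the current Debian archive layout (adjust to taste):

                # /etc/apt/sources.list on the test box only;
                # production machines keep pointing at stable.
                deb http://deb.debian.org/debian testing main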

              • koverstreet 20 minutes ago
                You definitely need different channels for high priority fixes and normal releases, stable and testing releases and all that.

                But two years is impractical and Debian gets a ton of friction over it. Web browsers and maybe one or two other packages are able to carve out exceptions, because those packages are big enough for the rules to bend and no one can argue with a straight face that Debian is going to somehow muster up the manpower to do backports right.

                But everyone else - whoever has to deal with Debian shipping ancient dependencies, or the upstream maintainers expected to field bug reports from ancient versions - is expected to just suck it up, because no one else is big enough and organized enough to say "hey, it's 2026, we have better ways and this has gotten nutty".

                Maybe the new influx of LLM-discovered security vulnerabilities will start to change the conversation; I'm curious how it'll play out.

                • rlpb 0 minutes ago
                  > ...upstream package maintainers who are expected to deal with bug reports from ancient versions...

                  They are not expected to deal with this. This is the responsibility of the Debian package maintainer.

                  If you, as an upstream, licensed your software in a manner that allows Debian to do what it does, and they do it to serve the users who actually want that, you are wrong to then complain about it.

                  If you don't want this, don't license your software like that, and Debian and their users will use some other software instead.

            • dagenix 44 minutes ago
              If you don't like the Debian model, don't use Debian. There are people who like the Debian model; it seems like you aren't one of them, though. That doesn't make them wrong.
            • zie 31 minutes ago
              Clearly you disagree with the Debian stable perspective. That's fine, it's not for everyone. You can just run Debian unstable or Debian testing, depending on where exactly you draw the line.

              If you want a rolling-release-style distro, just run Debian unstable. That's what you get: it's on par with all the other constantly updated distros out there. Or just run one of those.

              Also, Debian stable has a lifetime a lot longer than 2 years, see https://www.debian.org/releases/. Some of us need distros like stable, because we are in giant orgs that are overworked and have long release cycles. Our users want stuff to "just work" and stable promises if X worked at release, it will keep working until we stop support. You don't add new features to a stable release.

              From a personal perspective: Debian Stable is for your grandparents or young children. You install Stable, turn on auto-update and every 5-ish years you spend a day upgrading them to the next stable release. Then you spend a week or two helping them through all the new changes and then you have minimal support calls from them for 5-ish years. If you handed them a rolling release or Debian unstable, you'd have constant support calls.
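
              Concretely, "auto-update" here is just the stock unattended-upgrades package. A sketch with the conventional Debian paths and knobs (check your release):

                # apt install unattended-upgrades
                # /etc/apt/apt.conf.d/20auto-upgrades
                APT::Periodic::Update-Package-Lists "1";
                APT::Periodic::Unattended-Upgrade "1";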

          • washingupliquid 34 minutes ago
            > What it's about is, newer versions change things. A newer version of OpenSSH disables GSSAPI by default when an older version had it enabled.

            Debian patches defaults in OpenSSH code so it behaves differently than upstream.

            They shouldn't legally be allowed to call it OpenSSH, let alone lecture people about it.

            Let them call their fork DebSSH, like they have to do with "IceWeasel" and all the other nonsense they mire themselves into.

            When you break software to the point you change how it behaves you shouldn't be allowed to use the same name.

        • rlpb 4 minutes ago
          Refactorings and rewrites prove time and time again that they also introduce new bugs and changes in behaviour that users of stable releases do not want.

          For what you want, there are other distributions. Debian also has stable-backports, which does what you want.

          No need to rage at distributions that also provide exactly what their users want.

        • jeroenhd 1 hour ago
          If you want that, you don't want Debian. Other people do.

          Some people will even run Debian on the desktop. I would never, but some people get real upset when anything changes.

          Debian does regularly bring newer versions of software: they release about every two years. If you want the latest and greatest Debian experience, upgrade Debian on week one.

          From your description, you seem to want Arch but made by Debian?

          • jampekka 44 minutes ago
            > From your description, you seem to want Arch but made by Debian?

            Isn't that essentially Debian unstable (with potentially experimental enabled)? I've been running Debian unstable on my desktops for something like 20 years.

          • koverstreet 1 hour ago
            Well, my workstation runs Debian sid, and all the newer stuff runs NixOS...

            But that does nothing for people who write and support code Debian wants to ship - packaging code badly can create a real mess for upstream.

        • bluGill 18 minutes ago
          You have far too much faith in automated testing.

          Don't get me wrong, I use and encourage extensive automated testing. However, only extensive manual testing by people looking for things that are "weird" can really find all the bugs. (Though it remains to be seen what AI can do - I'm not holding my breath.)

      • wolttam 1 hour ago
        Looks like the version in stable is 2.91, which was released within a couple months of trixie. It's not 'ancient' by any stretch.

        FWIW the fixes referenced here are already fixed in trixie: https://security-tracker.debian.org/tracker/source-package/d...

        • braiamp 1 hour ago
          Yeah, was about to comment the same. Parent says "if it is ancient", and it is not, so the root comment is a nothingburger. Stable is one release cycle old, and depending on how things play out, testing may have 2.93 or later anyway.
    • wolttam 1 hour ago
      I dunno, 2.92 seems to bring in some new features and changes that would not typically be brought into a stable release: https://thekelleys.org.uk/dnsmasq/CHANGELOG
    • lutoma 42 minutes ago
      For what it's worth, Debian had a security update for dnsmasq yesterday, presumably to address this.
    • afarviral 1 hour ago
      What if the new release which contains the fixes has new dependencies, and those also have new dependencies? I assume they sometimes have to Frankenstein packages to preserve the boundaries of the target app while still getting major vulns patched in stable.
  • SoftTalker 5 minutes ago
    Never liked using dnsmasq. Always felt like too much in one tool. A local caching resolver, DHCP server, and TFTP/PXE boot setup were always things I preferred to configure separately.
  • rela-12w987 45 minutes ago
    The AI bug report tsunami isn't hitting all projects. As the top comment notes, MaraDNS didn't get any serious ones. I assume djbdns and tinydns didn't either, otherwise they'd shout it from the rooftops.

    I never understood why some projects get extremely popular and others don't. I also suspect by now that the tools "too dangerous to release" scan all projects, but their operators selectively contact only those with issues, so that they never have to admit the tool didn't find anything.

  • romaniitedomum 2 hours ago
    To quote a famous (in certain circles) bowl of petunias, "oh no, not again!"
    • antod 2 hours ago
      Are you saying this is Arthur Dent's fault? (again)
  • washingupliquid 2 hours ago
    It's a good thing this software isn't used in millions of devices which almost never receive updates.
    • amiga386 2 hours ago
      It's more of a good thing that, in most cases, it's on devices that won't send it any packets unless a client first authenticates to a Wi-Fi station or physically plugs into an Ethernet port.
  • xydac 1 hour ago
    Some of these will have made it into embedded hardware, making updates more challenging if, say, you have to flash an update.
  • dist-epoch 2 hours ago
    How bad is it if someone infects my home router using such a thing? They can MITM non-encrypted requests, but there are not a lot of those, right?

    What else can they do, assuming the computers behind the router are all patched up?

    • zrm 1 hour ago
      They can block traffic to update servers so the computers behind the router aren't all patched up, then exploit them. They also get access to all the IoT devices on the internal network. They can also use your router as a proxy so their scraping/attack traffic comes from your IP address instead of theirs.

      It's definitely bad.
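
      And since dnsmasq is the resolver in question, blackholing an update server is a one-liner for whoever controls its config. The address= syntax is standard dnsmasq; the domain is just an example:

        # one line in a compromised dnsmasq.conf:
        address=/security.debian.org/0.0.0.0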

    • PhilipRoman 1 hour ago
      If you blindly TOFU ssh sessions, those can be pwned easily in many common use cases. Legacy software configurations like NFS with IP authentication will be bypassed. Realistically, the most likely scenario is your home connection being used as a VPN exit or a DDoS node.
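
      The TOFU part at least is cheap to fix, for what it's worth; these are standard OpenSSH client options:

        # ~/.ssh/config: refuse unknown host keys instead of
        # accepting them blindly on first connect
        Host *
            StrictHostKeyChecking yes
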
    • Asmod4n 50 minutes ago
      They could try to exploit any device on your network, and since they can see which servers you connect to and how often you communicate with each one, they can write phishing mails tailored just for you.
    • nhattruongadm 1 hour ago
      [flagged]
  • mrbluecoat 1 hour ago
    > The tsunami of AI-generated bug reports shows no signs of stopping, so it is likely that this process will have to be repeated again soon.

    Welcome to the new world order.

  • ck2 1 hour ago
    If machine learning can find all these holes,

    why can't machine learning write a product from scratch that is flawless?

    • yjftsjthsd-h 1 hour ago
      Who said it can't? https://news.ycombinator.com/item?id=47759709 appears to be a nearly flawless (per spec) zip implementation.
    • tclancy 1 hour ago
      Because the problem is asymmetric: the attacker only needs to find one hole at one time. The defender has to be flawless forever.
    • perlgeek 1 hour ago
      LLMs certainly make it more feasible to rewrite a product in a memory-safe language, eliminating a whole class of bugs.

      Flawless software is hard for an LLM to write, because all the programs they have been trained on are flawed as well.

      As a fun exercise, you could give a coding agent a hunk of non-trivial software (such as the Linux kernel, or PostgreSQL, or whatever), and tell it over and over again: find a flaw in this, fix it. I'm pretty sure it will never tell you "now it's perfect" (and do so reproducibly).
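
      Something like this, where "agent" is a stand-in for whatever coding-agent CLI you use (hypothetical command; the loop is the point):

        # 'agent' is hypothetical; substitute your coding-agent CLI
        cd postgresql/
        while agent "find one flaw in this codebase and fix it"; do
            git diff --stat   # watch the fixes never converge to "now it's perfect"
        done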

    • _flux 1 hour ago
      Just because something is good at finding bugs doesn't mean it finds all the bugs. Finding a bug only tells you there was one bug you found; it doesn't tell you whether the rest is solid.
    • chromacity 1 hour ago
      If humans can find bugs, why can't humans write flawless code?

      Whatever the answer to that conundrum might be, LLMs are trained on these patterns and replicate them pretty faithfully.

    • hnlmorg 1 hour ago
      It’s easier to break something than it is to make something that cannot be broken.
    • jonhohle 1 hour ago
      Have you ever met a security engineer? I’ve never met one who was also a good engineer (not saying they don’t exist, I just haven’t met one). Do they find vulnerabilities? Sure. Could they write the tools they use to find vulnerabilities, most probably not.
    • duped 49 minutes ago
      You could argue the answer to this question depends on whether you believe P=NP.
  • tscburak 9 minutes ago
    [flagged]
  • cedum 1 hour ago
    [dead]