If AI writes your code, why use Python?

(medium.com)

835 points | by indigodaddy 1 day ago

246 comments

  • bryanrasmussen 13 hours ago
    One obvious reason is Python's extreme readability; it has often been described as being as close to executable pseudocode as one can get.

    If you're using an LLM to write code I think the rules would be

    1. Use a language you know really well so you can read it easily, and add to it as needed.

    2. Use a language that has a large training set so the LLM can be most efficient.

    3. Use a language that is easy to read.

    If your language has a small training set, or you don't intend to do much addition, or you don't really know any language that well, or you're restricted from using choice 1 for some reason, then 2 and 3 move up. Python has a large training set and it is easy to read.

    • simonask 13 hours ago
      Python is locally readable. Reasoning about larger systems in Python is where things get really hard, because you have to describe how many small individually readable things interact with each other in a very limited vocabulary.
      • bazoom42 13 hours ago
        For larger systems you create your own modules and abstractions, so comprehensibility at higher level does not depend so much on the language.
        • sundarurfriend 12 hours ago
          The tools the language gives you to create those abstractions make a lot of difference, however.
          • mbreese 10 hours ago
            But every abstraction that an LLM has to write is a choice. Your way of writing Python may not match that choice. The next run of the agent might not choose the same way.

            Because the language gives you many different tools, an LLM generated codebase can get inconsistent and overly complicated quickly. The flexibility of Python is a downside when you’re having an LLM generate the code. If you’re working in an existing codebase, it’s great - those choices were already made and it can match your style.

            When an LLM has to derive its own style is when things can devolve into a jumbled mess.

            • andyferris 8 hours ago
              In my experience, applying LLMs to a Python (or similarly dynamic) codebase that's currently spaghetti and monkey-patched, they can miss things just like I can.

              But… I have to admit Opus 4.7 has been very pragmatic in detecting root causes and proposing sensible fixes to bugs in this situation (i.e. bugs encountered in production, not at compile time).

              It’s also fine at matching current styles and conventions (which is great if they are good styles and conventions).

              In terms of new code, in Rust it would have been near impossible to write bugs requiring such a high degree of non-local reasoning, so I'm assuming these bugs wouldn't be present.

            • brookst 6 hours ago
              That’s why proper use of LLMs on large Python codebases starts with establishing coding standards docs and tests. Turning the LLM loose is chaos, but having clear architecture, naming, and other standards can get pretty consistent results.
          • rglullis 7 hours ago
            Name one of those abstractions that is missing in Python.
            • nrub 3 hours ago
              You joking?

              - strong typing
              - real concurrency (heaven forbid you want a background task without having to spool up an external message queue and worker)
              - immutability
              - limitations in error handling (sort of just typing really)
              - limitations in nullability (also typing)
              - memory layout is usually hidden or abstracted away
              - no actual private methods or classes

              That's far from a complete list, but maybe you're taking for granted the typical pythonic conventions that many practice. It requires a ton of work to design and architect Python systems of any non-trivial size for maintainability and understanding. No language is perfect, but there are plenty of languages that make supporting complex systems easier than Python.

              • jghn 2 hours ago
                > strong typing

                Python is a strongly typed language. Strong and Static typing aren't the same thing.

                • rglullis 59 minutes ago
                  You are obviously correct, but even if we leave aside the parent's confusion about strong/static typing, it's a weak argument.

                  Python does provide type annotations and extensive tooling for static analysis, so this whole "missing abstractions to help with understanding" is simply false. You can even set up a Python project to make annotations mandatory.
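
                  A minimal sketch of what that setup looks like (assuming mypy as the checker; the function and values are made up):

                  ```python
                  # Hypothetical sketch: under mypy's strict mode ("mypy --strict", or
                  # "disallow_untyped_defs = True" in mypy.ini), a function without
                  # annotations is rejected during static analysis, so annotations
                  # effectively become mandatory across the project.
                  def area(width: float, height: float) -> float:
                      return width * height

                  result = area(3.0, 4.0)  # 12.0
                  ```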

                  There are plenty of things to criticize about Python - performance, packaging and multiplatform distribution come to mind - but to think that it is missing the tools to help build and understand complex codebases is frankly absurd.

              • two_cents 2 hours ago
                You probably mean that Python is not statically typed.
              • rglullis 3 hours ago
                So, the abstractions are there, you just happen to think that the implementations are flawed or limited. This is not the same as claiming they are lacking.
        • flir 5 hours ago
          I find the class (C++-descended languages) as the primary abstraction much easier to reason about than the module (Python), and I'm not sure why. Could be familiarity of course, but I think it might be because a class has a more explicit contract with the outside world. It's more rigid.
        • Yokohiii 7 hours ago
          Abstractions are created to improve local comprehensibility. They introduce a cost for global comprehensibility.
      • bryanrasmussen 13 hours ago
        hmm, yeah, given LLMs' ability to churn out lots of code quickly and be overly verbose in that code, that is a potential downside: a quick one-time edit could create so much intellectual overhead that Python might be the wrong language for understanding what is going on.

        What language do you feel is easier to reason about in the large?

        • hiAndrewQuinn 13 hours ago
          Haskell would be my vote, and Rust too, actually, both because of their very strong type systems. The type system lets you very quickly figure out what something is before you figure out what something does, and it turns out that separating those two concerns as hard as those two languages do often results in doing the whole one-two punch faster.
          • jorvi 9 hours ago
            Bit of a nit, it isn't the strong typing that makes Rust great for LLMs, it's the very strict compiler.

            Plenty of languages have strong (enough) typing but their compilers happily let you or the LLM footgun yourself.

          • lukan 11 hours ago
            Haskell does not qualify for a large training set, though. (Nor for readability in my opinion)

            I think I have never seen Haskell software made with LLMs, but then, aside from university, I have not seen Haskell code at all. (Also, Haskell purists I would associate with people who avoid LLMs.)

            I would rather go with Rust given these choices.

            But I have good results with TypeScript (or JavaScript for simpler things). Really large set of examples. Tools optimized for it, agents debugging in the browser work almost out of the box. And, well, an elaborate type system.

            • co_dh 6 hours ago
              I used Claude to create a terminal-based table viewer, first in Rust, then in Lean, and finally in Haskell. https://github.com/co-dh/tv-hask/tree/main

              I gave up on Rust because it's not functional enough. There aren't many things Claude can prove about a table viewer, and Haskell fits very well and has enough libraries. Claude is pretty good at Haskell. I had barely written Haskell before, but I do know monads.

            • mightybyte 1 hour ago
              How much code do you think is necessary for LLMs to be good enough?
            • klodolph 8 hours ago
              I used Claude to generate Haskell and it works really well. Claude struggles sometimes with respecting abstraction boundaries, but Haskell enforces parts of those boundaries in its type system better than a lot of other languages (if a module can’t do IO, for example).

              Works well, in my experience. Sometimes the agent does weird stuff that you have to rewrite, but I get the sense that this happens in any language.

              Maybe Haskell’s training set is not large enough, but it seems to work despite the smaller training set.

            • yakshaving_jgt 10 hours ago
              [dead]
          • bonesss 6 hours ago
            In the window of Haskell-like and highly readable, I'd throw out OCaml and F# as strong candidates.

            In practice your code can be cleaner than Python: deeply flexible naming capabilities (including full sentences with backticking), efficient and powerful discriminated unions and types that enable near-English domains, a type system that keeps you honest and provides exhaustiveness guarantees, domain modules of applied functions that make for obvious and locally coherent domain grammars, and potent DSL support to create mini-grammars for legibility and expressiveness.

            I used to write Python by hand to reason, then type it up in C#. F# is just as easy with a pen, but far more powerful, with a strong type system and an aggressive compiler. OCaml and F# are also highly token-efficient languages, beating Python across the board for agentic work.

          • pmarreck 2 hours ago
            If you're going to go Haskell, why not go all the way to Lean 4 and get mathematical provability along with reasonable speed and type-safety?
          • hedora 7 hours ago
            I’d add perl (similar runtime semantics as python, but at least sigils give you some hint of developer intent. If you see &@%$$ck() in perl, you know you’re in for a ride).

            I’d also add, C, C++, Rust, Java, Swift, Typescript, Ruby, Lisp, Make, Awk and Sed.

            The only thing I’d rate a tie is Javascript.

          • Yokohiii 7 hours ago
            Wouldn't an LLM just produce massive type gibberish long term?
            • freedomben 7 hours ago
              I've "written" a lot of Rust via LLMs, and the Rust tooling and features give a lot of useful guard rails to LLMs that produce pretty good code overall, certainly compared to the Python I've seen it crank out. Clippy and fmt alone often cause the LLM to hit a snag, realize its mistake, and take a better approach. It's quite a powerful combo IME.
          • gf000 7 hours ago
            There are many languages with similarly strong type systems: Scala, Kotlin, OCaml, etc. (and nowadays, even Java). A GC may also be an advantage, in that the LLMs may get it right in fewer tries.
        • harperlee 12 hours ago
          I'd say Java, because it has a massive footprint amenable to training, and a strong type system (it does lack sum types, though, and those are trendy).

          You'd have to steer the LLM to use the style you want and not massively overarchitect things, but that's going to be an issue nonetheless.

          • mands 9 hours ago
            Java has sum types: they are fairly recent (sealed interfaces implemented by records) and can be exhaustively pattern matched on.

            (I do agree however, Java is a great target for LLMs)

        • jimmaswell 12 hours ago
          C# is as close to an ideal language as you can get for most things IMO. I find AI does a great job with it.
          • HumblyTossed 7 hours ago
            I like C#, it's how I make a living, but it's way too large today. I can program in valid C# and it looks like C, or I can program in C# and it looks like a functional language, or I can program in C# and it looks all angle-brackety like C++.

            The problem with that is everyone has an opinion on what good C# looks like.

            For personal projects, I'll take a much simpler language any day.

            • seabrookmx 4 hours ago
              While C# is a particularly egregious case, I think all reasonably long-lived, popular languages suffer from this problem. Go is being very intentional about not falling in this trap, but JavaScript, Python, Java.. modern/idiomatic code in all of these languages looks very different from the code you'd write using them 15 years ago.

              At my workplace, we use the .editorconfig and static analysis heavily to push us towards a consistent C# feature-set and style. This plays the same role that pyupgrade would in python, for instance.

            • bonesss 5 hours ago
              C# has recreated the C++ dialect conundrum. For some it’s effectively an idempotent functional language with unfortunate failings of exhaustiveness, for others it’s Java ca 2009, for others it’s C++ but not quite.

              Discipline, effort, linters, reviews, more discipline, more effort, retraining, discipline… and foot guns everywhere because so much of the adaptation has been a 95% solution. Personally I got everything C# promises even now when F# was dropped years ago and have found the interim pretty annoying.

          • pjerem 11 hours ago
            I do agree. C# is a hidden gem for AI. There are not that many different ways to get somewhere, so the model has probably been trained on the framework and libraries everybody uses (the Microsoft ones).

            Compared to most languages, including Java, C# will have a hard time letting you compile incoherent code.

            You barely need any dependencies other than aspnetcore and efcore for most applications and your AI knows them well.

            It’s easy to do TDD with it, so it’s easy to keep your AI from hallucinating.

            • kuboble 9 hours ago
              I definitely agree with the sentiment. However, this part:

              > There are not that much different ways to get somewhere

              This is far from true. C# is a language where you can operate on raw pointers through the unsafe keyword. On the other end of the spectrum, you can have duck typing in dynamic blocks.

              For operating on collections, you can use old-style loops, a chain of lambdas, or SQL-like syntax.

              I have been coding in C# old school way for most of my life at this point, and I feel like I'm in a foreign land reading code from some other C# projects.

        • barkingcat 1 hour ago
          Get the LLM to write Ada and have it use SPARK for verification.
      • neuronexmachina 6 hours ago
        Although it's not part of core Python, tach is pretty handy for specifying and enforcing those larger-scale interactions: https://github.com/tach-org/tach
        • nrub 3 hours ago
          Yeah, that's cool, but it would be almost completely unnecessary if Python just had actual private methods/classes/properties. It's a lot like pydantic, which would be completely unnecessary if you had static typing.
      • nostrademons 4 hours ago
        Locally readable is what I want for LLM-generated code, though. If I need to change the whole architecture, I re-prompt the LLM and have it rewrite the code for me. The changes that I'd need the code to be human-readable for are quick fixes where the LLM got something simple wrong and it'd take longer to explain to the LLM where it went off-track than to just fix it myself.
      • bootsabota 8 hours ago
        This is why good design documents will always be necessary.

        When I work with AI I always have it keep an up-to-date architectural document committed to the repository.

        Also, we need to be able to understand what is happening under the hood somewhat, so I very much agree that readability is crucial. And frankly, Rust is not up there in the readability realm.

        I think all the previous language designs still hold for their respective use cases, AI-written or otherwise. Why? Because performance acceptability is domain specific, and the algorithm's complexity generally determines overall performance.

        For example, move the performance critical stuff into a Python C extension like Torch etc…

      • scared_together 12 hours ago
        I’m curious about the design space of languages & frameworks which are lower level than LLM prompts but higher level than Python, Ruby and Common Lisp.

        Do you have any recommendations for systems where reasoning about large systems is easier than in python?

        • skydhash 10 hours ago
          You have to go into live programming: coding inside a running system and saving images. Readability is no longer a factor; what you want is easy access to documentation, quick navigation, and a playground.
      • ant6n 13 hours ago
        That’s true. Once you have APIs and want to use classes to create larger structures, the language is full of warts.
        • cturner 10 hours ago
          I have built large systems in Python that use classes, for more than ten years. I came to it from ten years of Java.

          As a rule, I avoid implementation inheritance. Occasionally I need to facade a library that assumes implementation inheritance to avoid it spreading into my codebase.

          When the codebase hits a certain size, I hand-roll some decorators to create functionality like java interfaces. With that done, and a suite of acceptance tests, I find it scales up well.
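
          A hypothetical sketch of that kind of hand-rolled decorator (not the commenter's actual code; all names are made up): it rejects, at class-definition time, any class that doesn't supply the named methods, much like a Java interface would.

          ```python
          def implements(*required):
              """Class decorator: fail fast if any required method is missing."""
              def check(cls):
                  missing = [name for name in required
                             if not callable(getattr(cls, name, None))]
                  if missing:
                      raise TypeError(f"{cls.__name__} is missing methods: {missing}")
                  return cls
              return check

          @implements("read", "write")
          class FileStore:
              def read(self):
                  return b""
              def write(self, data):
                  pass
          ```

          The check runs once when the class body is evaluated, so a class that drifts away from the "interface" blows up at import time rather than deep in production code.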

    • ashishb 12 hours ago
      Python is amazing for scripting.

      Python is terrible for writing big systems.

      Projects whose V1 is written in Go/Rust/C++ don't normally go out and re-write V2 in Python.

      The reverse is really common.

      Even many famous Python packages are now just Python wrappers around native code.

      https://ashishb.net/programming/python-in-production/

      • quietbritishjim 10 hours ago
        > Projects whose V1 is written in Go/Rust/C++ don't normally go out and re-write V2 in Python.

        That's because you would usually rewrite your Python program in something like C++ if you realise that it's too slow and you need the speed of a compiled language, despite the enormous extra complexity to create and maintain it that way.

        You wouldn't go back the other way because it's very rare to go to all that extra effort writing in a more efficient language only to realise that the slower performance of Python would've been adequate after all. And, thanks to sunk cost fallacy, even someone that does realise it is unlikely to make the switch back.

        There's no way you could convince me that a very large system is easier to code in C++ than in Python. C#, maybe.

        > Even many famous Python packages are now Python wrappers.

        Of course! That's precisely because Python is much simpler to code in. If your Python libraries are wrappers around native code then you get the speed benefit without having to drop into those languages. (Plus they can release the GIL, allowing true multithreaded Python.)

        If native coding languages were good enough then there would be no need for Python wrappers - you'd just call into the native library directly.

        • boringg 8 hours ago
          That, and many developers who would write the first round in Go/Rust/C++ would think it beneath them to write in Python :) The complaints alone wouldn't be worth it, even if there was some surprising specific use case.
        • Tade0 6 hours ago
          > You wouldn't go back the other way because it's very rare to go to all that extra effort writing in a more efficient language only to realise that the slower performance of Python would've been adequate after all.

          It's UIs which are typically rewritten in more "fun" languages - occasionally because it becomes too much of a maintenance burden when all one wants to do is move around some form controls.

      • marliechiller 11 hours ago
        I don't know if the reasoning for a rewrite is purely maintainability, though. I've used Python at scale and it's fine if you have reasonably good code hygiene. The reason I'd want to rewrite in any of those languages is that they're significantly faster _and_ maintainable at scale.
        • ashishb 10 hours ago
          > I've used Python at scale and it's fine if you have reasonably good code hygiene.

          True but that's the problem. Once you have a big enough team, it becomes an uphill battle to maintain that.

      • devman0 8 hours ago
        If you use the typing system (which I do religiously), Python becomes a lot easier to reason about in larger projects. It also makes linters and refactoring tools easier to use.
        • iainmerrick 6 hours ago
          Maybe I'm just using it wrong, but typed Python seems a long way behind typed JS (i.e. TypeScript).

          In Python it seems like there are multiple type-checkers with widely differing levels of coverage, so it's not at all obvious which one to use, and typing is really spotty in third-party libraries. So you can get some level of type safety, but it doesn't feel very dependable.

          In TS, there's one canonical checker and the others work hard to stay compatible with it; and typing in third-party libraries is generally very solid. There are still some old libraries without types, but I think those headaches are mostly in the past now (similar to the Python 2 -> 3 switch).

      • roncesvalles 10 hours ago
        Exactly. A lot of people forget that Python is just shell scripting++, taken way too far.
      • kurtis_reed 12 hours ago
        Python is faster to write so obviously you'll see things built in Python first more often than the reverse. What's that quote -- "Better to remain silent and be thought a fool..."
        • ashishb 11 hours ago
          Indeed. Python is faster to write and harder to maintain over the long run.

          The "faster to write" advantage becomes less relevant if most code is going to be auto-generated.

          The "harder to maintain" might still remain more relevant.

          • tclancy 7 hours ago
            >harder to maintain over the long run.

            First off, this is begging the question. Second, if you never get to a point where you need to maintain something, who won?

    • DaanDL 13 hours ago
      I never really understood what exactly is so readable about Python. I've been developing in Python for 8 years now, and before that I was a C# developer, and I don't find Python to be that much more readable.

      Sure there's less ceremony, and yes, you can have your project going with just a single file, but other than that...?

      • KptMarchewa 9 hours ago
        I think the meme comes from the fact that in the 00s and early 10s most people looked at Python code coming from C++ and Java.

        In Java bad OOP conventions were commonplace, like everything using getters/setters, deeply nested class hierarchies and insane patterns like AbstractSingletonProxyFactoryBean. It got impossible to figure out what's going on.

        C++ just got every possible feature, each badly interacting with the others, in an amount that could never fit in a single person's context window. That basically led to a situation where every programmer or company had its own dialect of the language; dialects other than your own were mostly incomprehensible.

        Python has its own share of bad features, and for a long time a really bad ecosystem around the language: Python 2 vs Python 3; eggs vs wheels; easy_install vs pip; 123489 ways of installing Python, each of them bad. But once it started to become better, in the mid-late 10s, around Python 3.5 or 3.6, it exploded in popularity.

        • delecti 31 minutes ago
          The AI boom has really carried Python up with it, but it was quite popular as early as the mid '00s. I remember grumbling in college around that time that the CS curriculum was shifting from Java to Python, because I didn't like Python and thought it was a worse first language.

          Incidentally, even though I still hold those opinions, I can admit that history has solidly shown them to be unfounded.

        • boringg 8 hours ago
          Python data processing/ML in the 2010s became a huge asset for the language.
          • jimz 3 hours ago
            Ironically it also created a ton of really badly written Python in the process.
        • mjd 8 hours ago
          C++ and Java and … Perl.
      • bazoom42 13 hours ago
        C# is also a great language, but notice how it has been moving closer to Python-style syntax. E.g. now you can initialize a list like [a, b, c]. They wouldn't add that syntax if they didn't think it was an improvement.

        Less ceremony and boilerplate means more readable code.

      • dust-jacket 11 hours ago
        Reaaaally?

        I think a lot of the readability of Python is in the fact that you don't need to be recently familiar with it to pick up what it's doing most of the time.

        Over my career I've dipped in and out of rust, typescript, perl, swift, etc codebases. I'm no expert in any of these, but every single time I have to look something up to understand what this set of arcane symbols or syntax means.

        When I dip into Python I just ... read it.

        (None of this is to say I prefer Python, just that I really do get the readable thing)

        • ModernMech 7 hours ago
          I dunno, as someone who doesn't program in Python, I find dunders to be very confusing. Like, how is this readable?

          _foo

          foo_

          __foo

          _Foo__bar

          __foo__

          foo__bar

          All of that is valid Python, and some of those forms mean different things depending on where they are used.

          • AndrewOMartin 6 hours ago
            The second, fourth, and sixth options aren't used AFAIK.

            Otherwise, a leading underscore indicates a private attribute, but that isn't enforced. A double leading underscore is also "private", but is "enforced" by giving it a mangled, hard-to-predict name. Double underscores on both sides mean the function hooks into Python's data model, like when you want to give a class some behaviour with + or == or [].

            It's not trivial, and not particularly intuitive, but it's not necessarily terribly confusing.
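
            A small runnable illustration of those rules (class and attribute names made up):

            ```python
            class Account:
                def __init__(self, balance):
                    self._hint = "private by convention"  # single underscore: convention only
                    self.__balance = balance              # double underscore: mangled to _Account__balance

                def __len__(self):                        # dunder: hooks the built-in len()
                    return 1

            acct = Account(100)
            mangled = acct._Account__balance  # still reachable, just hard to hit by accident
            size = len(acct)                  # 1
            ```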

          • seunosewa 6 hours ago
            What do you mean? Those are valid identifiers but programmers aren't required to use them.
      • sundarurfriend 12 hours ago
        "whitespace, not brackets" from a sibling comment touches on it, but a lot of people, beginners especially (but not uniquely), are put off by symbols when reading code. Python is less symbol-heavy than most languages, by using whitespace and syntax and words (eg. `and` not `&&`, explicit `lambda x:` rather than `x =>`) in their place. It doesn't go so far as COBOL as to be cumbersome, but far enough to make a difference to a lot of people.
      • red_admiral 7 hours ago
        If you're doing non-CS academic research and you get only one course/module to teach the new grad students "programming", python it is. That you can get a project going with 3K LoC in a single file is a bonus in academia :)

        The scipy/numpy dataframes model is really neat though, Python has all the cool machine learning features, and since they're just wrappers around some C++ and Fortran, it runs fast too if you do things properly.

      • jimbokun 1 hour ago
        Well yeah.

        Dropping the ceremony means all that’s left is the ideas and the intent of the code. Which is exactly what you want for optimal readability.

      • lukan 10 hours ago
        " and before that I was a C# developer"

        So... you were already trained in reading abstract code.

        A beginner, on the other hand, sees lots of intimidating {} everywhere in C-family languages. Python does not need them, and in design, less is usually better.

      • bityard 8 hours ago
        Python USED to be easy to read, before a lot of the newer features like type hints crept in. 20 years ago, Python looked like executable pseudocode.
      • trashb 11 hours ago
        I agree. Very "pythonic" structures, especially if overly shortened, are hard to decipher, particularly if you don't use or read Python on a regular basis.

        Oftentimes when I am reading a medium or advanced Python codebase, I need to look into the function definitions and operator documentation to understand what is supposed to be returned. With C-like languages I feel it is easier to build that context, because more context is written down and there is less tricky syntactic sugar.

        • bazoom42 11 hours ago
          > if overly shortened are hard to decipher especially if you don't use or read python on a regular basis.

          Sure, but this is the case for any language.

      • matsemann 9 hours ago
        I agree. My Kotlin is readable. The functional code, with typing all the way, tells what every step is doing. My same code in Python is a hot mess of nested list comprehensions and limited lambdas.
      • fragmede 13 hours ago
        The "other than that" is whitespace, not brackets. Whether that's a big deal is up to you, but the knock-on effect is that the code is indented the way the control flow interprets it, so there are no bugs from misplaced braces. (Plenty of other bugs for other reasons, unfortunately.)
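
        A small made-up example of indentation as control flow:

        ```python
        # In C, "if (x) do_a(); do_b();" runs do_b() unconditionally even when the
        # programmer indented it under the if. In Python, the indentation *is* the rule:
        log = []
        for n in range(3):
            if n % 2 == 0:
                log.append(("even", n))
            log.append(("checked", n))  # dedented one level: runs every iteration
        ```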
        • strangegecko 13 hours ago
          I find brackets help me understand structure from a distance much better than whitespace.

          Misplaced brackets seem like a thing from the past to me when we didn't have IDEs. I don't remember ever having a bug due to that.

          • zahlman 12 hours ago
            > I find brackets help me understand structure from a distance much better than whitespace.

            I can't imagine how. Whitespace physically lays out the block structure on the screen; braces expect you to count and balance matching symbols, and possibly scan for them within other line noise.

            • KptMarchewa 9 hours ago
              This is a 00s POV. If you spend any time on syntax formatting in 2026, you're wasting it. It's a solved problem.

              Any reasonable language with braces has a standard formatter that will just put each brace level at a different indentation level.

            • pmontra 10 hours ago
              Nevertheless, it happens that while moving code around one wonders at what indentation level that code should go. Undo, undo, or git show the original code, look at it, retry more carefully.

              Brackets would allow the editor to autoindent the pasted code.

              No choice is perfect.

            • polytely 12 hours ago
              Working in C#, I feel I basically still read code structure by the visual block structure / indentation. I don't think I've ever counted braces in my professional life. The IDE makes sure it is formatted correctly, and ambiguity is basically impossible.
            • RHSeeger 8 hours ago
              Whitespace and braces work together to make the code more readable; both by the computer and the human. And they make it less likely to have errors, because the braces convey intent (much like parens in math when they're not "needed")
          • bazoom42 11 hours ago
            So you would find bracketed code without any use of indentation easier to read than python?
            • pmontra 10 hours ago
              It's no longer 1990, when Python was born. Editors have been automatically indenting bracketed code for a long while. Probably Notepad doesn't, or maybe plain vanilla vim.
        • johncearls 11 hours ago
          Whitespace forcing proper indentation practices has always been one of my favorite aspects of Python. I TA'd a data structures in C++ class, and the lack of proper indentation making code unreadable was my biggest pet peeve. I always made the students fix their indentation before I would help them debug it.

          I know that is mainly a beginner coding issue, but never having to deal with that issue was always one of the biggest advantages of python.

          That said, I believe a lot of the stuff that was added in 3 and beyond (to make it more typesafe, accounting for unicode, etc) has made it a lot less readable over time. You can argue that it has made Python a better and safer language, but the pseudocode aspect has gotten worse. I kinda miss that.

        • jeltz 12 hours ago
          Python and C are the only languages in which I have experienced that class of bugs. And that is due to if statements without brackets in C, and because Python has meaningful indentation which people have accidentally messed up when refactoring.

          And today, with autoformatters, I think only Python is still vulnerable.

          • zahlman 12 hours ago
            If you are messing up indentation accidentally during refactoring there is either something wrong with your tooling (including your text editor) or you are letting things get too far out of hand before starting the refactoring.
            • forlorn_mammoth 7 hours ago
              It's 2026. I'm using Jupyter notebooks in Databricks. Guess what my tooling (including my "text editor", the Jupyter notebook), does not do?

              Yes, I can Ctrl-[ to shift a block of code left or right, but this is not always problem-free, nor is it automatic, nor does it have any sense of where the indents should go.

              Yes, there is a "format python properly" button, which often errors out and says "there is an indentation error in your python so I cannot automatically indent it"

              Would I like to use better tooling? I present my .vim file as evidence. Am I using what they tell me is state of the art? Yes. And in 2026, state of the art does not solve Python indenting, because Python indenting is inherently a broken paradigm.

              • Sohcahtoa82 3 hours ago
                Does your tooling not allow you to select multiple lines of code and press Tab or Shift-Tab to indent/dedent the entire block?

                It usually only takes me a 1-5 seconds to fix the indentation when I copy/paste code that existed at a different indentation level. This is not something I'd complain about, personally.

            • RHSeeger 8 hours ago
              Ah, the old "you're doing it wrong" argument. Moving code from one place to another (copy/paste from online, or just from one file to another) is a fairly common source of bugs for a lot of people when it comes to Python. At some point, it becomes clear it is an issue with the language, not the people.

              I enjoy Python, but the significant whitespace is _not_ one of the reasons.

        • sophacles 3 hours ago
          There are plenty of Python bugs from mis-indented code, particularly given the multiple parts of a flow that "else" can apply to: for/else, while/else, if/else, try/else, and so on. It happens quite often in Python codebases I've seen.

          Also, good automatic formatters (gofmt, rustfmt, etc) also indent along control flow lines, so without the braces you just changed a syntax error into a "hmm, this is acting really strangely" bug-hunt by using python.
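
          A minimal sketch of that failure mode (toy function, made-up names): both versions below are syntactically valid, so an autoformatter cannot tell which one was intended.

```python
# Intended: for/else -- "missing" is recorded only when the whole
# loop finishes without hitting `break`.
def scan(nums):
    hits = []
    for n in nums:
        if n % 2 == 0:
            hits.append(n)
            break
    else:
        hits.append("missing")
    return hits

# The same lines with `else` one level deeper: now a plain if/else,
# so "missing" is recorded for every odd element along the way.
def scan_misindented(nums):
    hits = []
    for n in nums:
        if n % 2 == 0:
            hits.append(n)
            break
        else:
            hits.append("missing")
    return hits

print(scan([1, 3, 4]))              # [4]
print(scan_misindented([1, 3, 4]))  # ['missing', 'missing', 4]
```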

      • roncesvalles 10 hours ago
        People confuse having fewer keywords/concepts to learn for readability, which is not really the same thing.

        Someone who is equally expert at Java and Python will probably consider Java to be more readable.

        • bityard 8 hours ago
          The concept of "readable" is not really relevant for experts, because, well, they are experts. Being an expert automatically means you can read almost any line of code and know everything it does.

          Everyone else appreciates and is more efficient working with code that is intuitive to grasp.

        • RHSeeger 8 hours ago
          I have many years of experience with Java, and rarely use Python... and I'd say Python is, in general, easier to read. There's generally a lot less "having to go look at _other_ code to know what _this_ code is doing".
      • bjourne 11 hours ago
        Other than that? Exactly that!
      • huflungdung 11 hours ago
        [dead]
    • theshrike79 11 hours ago
      My preferences are always Go first and Python if there are specific libraries that make my life easier.

      Go is a simple target for LLMs as the language has changed very little and with the Jetbrains go-modern-guidelines[0] skill the LLM can use the handful of recent additions effectively

      And with Python there are things like ruff and pydantic that can enforce contracts in the code.

      [0] https://github.com/JetBrains/go-modern-guidelines
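
      As a sketch of the pydantic side of that (the model and fields here are hypothetical): a schema turns free-form agent output into a hard contract, so malformed data fails loudly at the boundary instead of deep inside the program.

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical job spec an agent might emit as JSON.
class Job(BaseModel):
    name: str
    retries: int = Field(ge=0, le=10)  # contract enforced at parse time

ok = Job.model_validate({"name": "sync", "retries": 3})
print(ok.retries)  # 3

try:
    # A non-integer value is rejected instead of propagating downstream.
    Job.model_validate({"name": "sync", "retries": "lots"})
except ValidationError as e:
    print("rejected:", e.error_count(), "error(s)")
```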

      • RHSeeger 8 hours ago
        The folks I work with rave about how well the LLMS work with Go.
    • randusername 8 hours ago
      Python is great at AI code gen for a combo of reasons: big stdlib, readable, 3rd party libraries to do most anything with great online documentation, big mind-share and presence online.

      The big one to me is that it's interpreted. Claude Code does these wild `python -c` "one-liners" that end up spanning a hundred lines or more. It's so ingrained that it does this for solving general problems to create on-the-fly system reports, not just when you specifically are using it for Python development.
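
      The sort of inline script that ends up inside those `python -c` calls looks roughly like this (a hypothetical, trimmed-down example):

```python
import json
import platform
import sys

# Ad-hoc system report of the kind an agent can run on the fly,
# with no build step in the way.
report = {
    "python": platform.python_version(),
    "platform": sys.platform,
    "executable": sys.executable,
}
print(json.dumps(report, indent=2))
```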

      One of my more interesting experiments has been "mirroring" a Python codebase I maintain with a synchronized one in another language the AI maintains.

    • giancarlostoro 44 minutes ago
      I've been using Rust more because of Claude, but I mostly do C# / Python otherwise. I can read Rust, and understand it, but I don't have the patience to fight the compiler, or the liberty of free time. I do however have in my brain plenty of architectural ideas I can convey to a model and get back a sound program.
    • teleforce 12 hours ago
      I think this is where D language make an excellent alternative to Python for AI assisted coding [1].

      1) It's a very consistent language, even compared to the other popular languages, namely Python, Rust, C++, and Go. Try implementing a doubly linked list in each of them and compare [1].

      2) It's probably the most "Pythonic" among compiled languages, according to Walter.

      3) It utilizes GC by default, you can also manage your own memory and you can hybrid.

      4) It compiles fast and runs fast; heck, it even has a built-in REPL ecosystem.

      5) Regarding the small training set, with recent self-distillation fine-tuning approach it should be good enough, D (actually D2 version) has been around for more than a decade [2].

      [1] Looking for a Simple Doubly Linked List Implementation:

      https://forum.dlang.org/thread/osmecwfnpqahoytdqpkr@forum.dl...

      [2] Awesome D:

      https://github.com/dlang-community/awesome-d

    • speleding 2 hours ago
      > ... something you know well and is easy to read ...

      For this reason I tell my LLMs to use Ruby whenever possible. In one rare case where the performance of my script was critical, I told Claude to convert the working ruby script to Rust. It got it right in a single shot.

    • LukaD 11 hours ago
      > 2. Use a language that has a large training set so the LLM can be most efficient.

      I seriously doubt this is really the case. From my experience, coding agents just love writing bad Python code. They always need explicit instructions, for example to use uv instead of raw-dogging pip. There is a lot of Python code out there because it is taught as a beginner language, and because of that there is necessarily a lot of Python code written by beginners. That's my explanation, at least, for bad LLM-generated Python code.

    • niam 7 hours ago
      Python does have a huge training set, but I figure a lot of that training comes from disciplines where maintainability or system design isn't as heavily incentivized. Reports, notebooks, dashboards, etc.

      My early experiments with LLM Python seemed to give me that impression, but I'm wondering if it's better now or people have other experiences.

    • jelder 7 hours ago
      There's a huge difference between a program which can be verified as correct by static analysis and a program which can only be verified as correct by running it. Python is the latter (though maybe in between, with gradual typing). The iteration loop just collapses to almost nothing when an agent is driving an LSP in a statically typed language.
    • javier123454321 11 hours ago
      No, I think the argument from the article is pretty good. Use a language that has a lot of guard rails built in.
      • nicman23 11 hours ago
        or a compiler that makes the llm sad
    • slifin 13 hours ago
      I would assume it's important to know what's in that training set too

      Because I get reliable generation out of "niche" languages already

      Is it code with lots of SQL injections used in a different domain to your own?

      It's maybe not good to conflate quantity with quality

      • fragmede 13 hours ago
        This is dated, but a professor told me that LLMs are really really good a generating bad pandas code because it's been trained on so much of it!
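
        For illustration, the pandas anti-pattern in question is usually row-by-row iteration where a single vectorized expression would do (toy data, assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# The pattern LLMs reproduce from bad training data: iterate row by row.
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])
df["total_slow"] = totals

# The idiomatic, vectorized version: one expression, far faster on real data.
df["total"] = df["price"] * df["qty"]

print(df["total"].tolist())  # [10.0, 40.0, 90.0]
```
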
    • pplonski86 7 hours ago
      Python's output is also very versatile. You can use Python to build a command-line script, a web application, a desktop app with a GUI, a notebook with data analysis, or a Python package to share with others. There are many ways Python code can be used by the final user.
    • pmarreck 2 hours ago
      Python also has a rather high number of footguns, which is not a feature of other equally-capable languages.
    • PunchyHamster 12 hours ago
      > One obvious reason is Python's extreme readability, it has often been described as being as close to executable pseudo-code as one can get.

      But it's LLMs that read it, not humans. At least that's the trend.

      > Use a language that has a large training set so the LLM can be most efficient.

      It's pretty efficient with Rust.

      • subscribed 12 hours ago
        But plenty of humans like to be able to read the generated code and understand / edit that.
    • bfrog 6 hours ago
      Funny, I need an LLM to figure out what most people consider "readable" Python, as it's highly unreadable to me. The lack of types, the top-to-bottom flow, and more tend to make anything in Python that's > ~1000 LOC very confusing for me to read.
    • seunosewa 6 hours ago
      Less verbose languages also use fewer tokens, saving precious context.
    • p-t 8 hours ago
      Tbh, Python seems really unreadable to me, and I'm saying that as someone whose first [non-Scratch] programming language was Python. Stuff with syntax closer to C or JavaScript makes it easier to see where things start and end.
    • hn-acct 6 hours ago
      I personally don’t find it readable at all.
    • stackghost 2 hours ago
      As much as I dislike writing Go, I think it's pretty close to the ideal LLM language. There's usually one very obvious correct way to accomplish things, there's a ton of training data, it's GC'ed, the standard library is expansive and of good quality, there's a large ecosystem of 3rd-party packages, etc.

      About the only place where I don't think Go works for agent-heavy workflows is that it's not very concise. It takes a lot of Go code to express what other languages can do in many fewer lines, and I think this not only wastes context window but also just makes it harder to keep everything in my poor little human brain.

      LLMs also do a pretty good job writing modern C++.

      I much prefer writing Common Lisp, but I've noticed that LLMs (Claude 4.6+ and GPT 5.x) aren't nearly as good at writing Lisp as they are at more mainstream languages, plus Lisp's syntax makes it a little hard to read sometimes, especially if you're not in the habit of reading it every day.

    • psychoslave 11 hours ago
      Disagree. It's full of needlessly verbose stuff, uses _ for everything and the rest, and other opaque conventions. Not that any other dev ecosystem is free of these issues, but Python just doesn't shine much on them. If anything, in terms of scripting languages, Ruby provides a far more solid ground for compact and readable exposition of ideas through something close to prosaic expression.
    • nicman23 11 hours ago
      c llm code is more readable as it probably trained on better code
    • odyssey7 8 hours ago
      Haskell is more readable. It looks just like pseudocode. Change my mind.
      • jimbokun 1 hour ago
        If Haskell coders could get past their obsession with naming all the important functions using only punctuation characters, that might be true.
    • moffkalast 13 hours ago
      So in short, use Javascript /s
    • fennecfoxy 12 hours ago
      I think that pseudocode aspect is what makes it hard/frustrating to read for me.

      I'm more of a c++/TS/etc user, so I miss braces a lot. I think a basic Python script sure it's easy to read through, but a large project starts to get quite ugh.

      I am very jealous of Python's numerous built-ins though. I was looking for a JS sum function the other day and was surprised to see node.js still doesn't have a built in + you still cannot reference operator functions.
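
      For comparison, the Python side of that complaint: `sum` is built in, and the `operator` module exposes operators as plain functions you can pass around.

```python
from functools import reduce
import operator

nums = [1, 2, 3, 4]

print(sum(nums))                   # 10, no helper needed

# Operators are first-class via the operator module -- the part
# missing in JS, where `+` isn't a referenceable function.
print(reduce(operator.add, nums))  # 10
print(reduce(operator.mul, nums))  # 24
```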

      • FartinMowler 11 hours ago
        But at least JS now has a built-in leftpad function ;) (called padStart).
      • fennecfoxy 11 hours ago
        Lmao, are people really downvoting me because I don't like Python? Tribalism is present in all areas of human life, I suppose.

        You people should grow up. Programming languages are tools, not pets.

  • pshirshov 10 hours ago
    No reason, unless the project is simple. The more you can offload onto your compiler/type checker, the shorter the feedback loop and the better agents work.

    The lack of strictly enforced static typing makes agents fail much sooner with Python. In my opinion, Rust and Scala are the best targets for agentic flows - and, coincidentally, they have the most advanced type systems among mainstream languages.

    But any statically typed language behaves better than any dynamically/duck typed language. When I say "better" I mean delivery time and the amount of shipped defects.

    Another thing which helps (but not generally applicable) - ask your agent to verify critical protocols with formal proof in TLA+/lean/coq. Agents are bad at formal proofs - but generally are much better than most of the humans.

    • weberer 8 hours ago
      >Lack of strictly enforced static typing make agents fail much sooner with Python.

      Just tell your agent "Use type hints. Add a pre-commit hook to run ruff, black, mypy, and pytest." It will save you 99% of headaches.
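
      A small example of the class of bug that setup catches (illustrative function): the good call runs fine under plain Python, and mypy rejects the bad caller before it is ever committed.

```python
def total(a: float, b: float) -> float:
    # With hints, a string argument is a type error at check time.
    return a + b

print(total(1.5, 2.5))  # 4.0

# Without hints, total("10", "20") silently "works" at runtime and
# returns '1020' via string concatenation. With hints, mypy reports:
#   error: Argument 1 to "total" has incompatible type "str"; expected "float"
```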

      • loglog 5 hours ago
        The list of tools that Pythonheads present as a definite solution to their problems changes every year, yet the results are still far behind Rust/Scala/Kotlin/C#.
      • pshirshov 8 hours ago
        I've tested many flows involving linters. Results are far from ideal - agents tend to work around linters, mass-add ignore annotations, etc, especially in situations when fixing one warning/error triggers another (and that happens regularly).
        • jimbokun 57 minutes ago
          LLMs really are just like an intern in their first coding job!
        • the_af 7 hours ago
          What if we tell the agent NOT to add ignore annotations (or to ask about them if there's no other reasonable way to proceed)?
          • codybontecou 7 hours ago
            You're devolving into the realm of "What if we tell the agent to just get it right?"

            Relying on the prompt to ensure the code it writes is correct is where things fail. Types, tests, linting, etc. are deterministic tools the agents tend to respect.

          • pshirshov 7 hours ago
            They tend to ignore such instructions on first circular issue - even with Opus you have to kick it really hard, insist on generalization and intervene manually. In my opinion this is not a productive/workable approach for large projects.

            Typical failure mode: "I fix pyright error A, it causes pyright error B, pyright is broken, I will exclude both A and B through pyright config and will add ignore annotations for both A and B and will write a couple of idiotic comments about that".

    • tdeck 9 hours ago
      What if you want to offload a lot of the work to libraries rather than generating (and presumably reviewing?) it yourself? Python has a very strong ecosystem of useful libraries because it's been around so long and is popular in a number of application domains.
      • petra 7 hours ago
        You could use Scala, get the strong typing, but also get access to the Java and also Python/JS libraries and others via various interop mechanisms Java has.
        • bombcar 6 hours ago
          And you also get compile times so long you’ll never run out of tokens!

          I kid, I kid, but seriously …

          • beastman82 6 hours ago
            "sbt --client" is really fast for me, and I'm using derivation and implicit scope etc. super warm JVM, incremental compiles.
      • pshirshov 8 hours ago
        Well, Python is not the only language with a mature ecosystem.

        Also, in many cases it's cheaper to rewrite a small lib instead of fighting crappy code - but that applies regardless of the target language.

    • qoez 8 hours ago
      I feel like types don't really make a difference in this context. Types are good for enforcing standards in a multi-human situation, but AI doesn't really make the same mistakes that warrant them.
      • pshirshov 8 hours ago
        Models are trained on human data and they make exactly the same mistakes - and a lot of mistakes humans don't usually make.

        Also they are extremely bad at high-level design.

    • empath75 7 hours ago
      I would argue that you should use python if the LLM is writing code for a non-programmer to understand and contribute to. Ie: things like notebooks.
    • echelon 9 hours ago
      Rust is a fantastic language to emit from AI.

      Studies report that the language design tends to result in lower-defect code (vs peers such as Go and Java) due to how the syntax aids error handling, logic flow, and API design.

      You don't need to know Rust to begin using it. You'll learn it quickly enough.

      The code is easy to read, and Serde makes parsing, especially JSON, extremely pleasant. Writing HTTP services is a breeze.

      AI makes Rust development go 10x faster. The borrow checker isn't even an issue. It's invisible now. You almost never hit it anyway when you write web services, but now it's no issue at all when writing highly concurrent code too. Claude etc. emit the correct code and lifetimes, and it's entirely ergonomic and idiomatic.

      The biggest problem with Rust is the compile time.

      • joaohaas 9 hours ago
        What study?

        And I don't see how Go design patterns would be any worse. The main issue people have with it is the repetition/verbosity, which LLMs handle just fine.

        • ModernMech 7 hours ago
          The repetition and verbosity makes it more expensive for the LLM to write. You'd want a language that is expressive and dense if you're optimizing for token usage.
          • bombcar 6 hours ago
            APL would seem to be perfect!
      • oblio 9 hours ago
        > AI makes Rust development go 10x faster. The borrow checker isn't even an issue. It's invisible now.

        What happens when things break and the AI agent can't fix it?

        • jmalicki 9 hours ago
          That would be bad, which is why Rust is preferred to Python. In Rust, when things break, the AI gets a clear error message that makes it clear how to fix them. In Python, when things break, the AI will randomly spin its wheels for days trying different things without being able to root-cause the issue.

          A failure to compile is by far the easiest thing for the AI to fix.

        • echelon 9 hours ago
          You'll learn Rust faster with AI and should be able to solve it yourself.

          You're unlikely to wind up in such a situation though. The design work Claude does in Rust is really sensible and idiomatic, and I really don't think you'll be unable to refactor or redesign things. Claude is extremely good with Rust generation, refactoring, and manipulation.

          I'll go as far as to say that AI has removed most of the complaints people had with learning or using Rust. It's not even a speed bump now.

          • oblio 7 hours ago
            I would imagine this assumes the operator is still familiar with the borrow checker, lifetimes, etc. Otherwise how can the human operator know if what Claude does is actually sane (i.e. it compiles but doesn't do what it actually should)?
          • therouwboat 8 hours ago
            Come on now, how do you learn anything if you just tell Claude to do something and feed it error messages?
            • Yokohiii 7 hours ago
              Yeah, I don't think you really learn a language with agentic coding; at least you won't become fluent writing it. I currently use copy-paste chatbot help to write some C code, and I am still often stuck on how to progress manually. And I refactor a lot, because the LLM code is subpar for me.
              • echelon 6 hours ago

                  > Yeah, I don't think you really learn a language with agentic coding
                  > [...] 
                  > And I refactor a lot because the LLM code is subpar for me.
                
                This is learning.
                • Yokohiii 6 hours ago
                  > Yeah, I don't think you really learn a language with agentic coding, at least you wont become fluent writing it.

                  Don't misquote me for your agenda. It's a statement about the quality of learning.

                  • echelon 4 hours ago
                    I'm presuming you don't like AI (apologies if I'm mistaken).

                    Just because someone doesn't like some tool doesn't mean someone can't learn using the new tool or method.

                    Users of an old technology often adopt a hostile disposition toward a new technology that threatens their skill. To claim people can't learn at a higher level of abstraction is absurd. Kids with motivation are smart, and they will outpace the older generation.

                    If I had the advantage of LLMs and agentic coding when I was a teenager, I could have gone wider and deeper in my career. I'm jealous that young learners are going to be able to do more than I could at their age. I'm happy for them.

                    If you're using AI to vibe code, then editing the results - that's learning. Period. That's a feedback loop. And it's probably more interesting and rewarding than what we had.

                    • Yokohiii 1 hour ago
                      I use chatbots a lot to explore things that I am not so well versed in, about coding and other things. As an entry point it's very useful. LLMs also help to discover more advanced knowledge that is opaque otherwise. Of course using an LLM can be a useful learning process. It would have helped me in my early career too.

                      If you never code yourself, I don't think your muscle memory will adapt to what you learn. This is practically the same for me when I read a language reference, I read it, think I got everything and then I open my editor and can't type, I have to go back and read up every bit that I want to type. So the problem is probably not even LLM specific, it's just the lack of repeated typing. And yes, I think even with LLMs manual typing is useful. Often very subtle things are hard to explain and easier to type. If you don't have it in your muscle memory, you are less efficient.

                      I am not convinced that vibe coding will teach you the right things. Writing code is one thing; making good decisions is a whole other level. You learn that only by failing over and over. A beginner wouldn't even understand his own architecture and the data structures he generated, so he wouldn't understand why he failed or how to improve. LLMs also respond very variably on the "right" way to deal with problems. I often disagree with them; they may have incomplete knowledge, or just prefer their overtrained "best practices", or worse, they just give different answers based on statistical variance. If you need any decision, they are good; if you need a quality decision that is perfectly suited to your constraints, they require a lot of instructions and will still fail.

                      I don't hate AI; I hate that some people are very naive about its usage and usefulness. I don't see that AI threatens my skill; it probably threatens parts of the things I've delivered in the past. But to be honest, those were the boring parts. Let the vibe coders do them. But if you really think/hope that LLMs will excel at certain coding tasks, then you should be wary of specializing in them. Because one day, they won't need your help anymore.

  • _boffin_ 20 hours ago
    I read the first few comments and was surprised I didn't see it: training data. The voluminous amount of Python in the training data.

    I could write in Brainfuck with AI, but I presume I wouldn't get the same results as if going with Python.

    My follow up question: with AI now, why care about a lang until you need to?

    • gertlabs 19 hours ago
      Surprisingly, LLMs are actually much worse at reasoning in Python than in other common programming languages for agentic coding tasks.

      Data here: https://gertlabs.com/rankings?mode=agentic_coding

      • BariumBlue 18 hours ago
        Hah, I was just thinking that Python likely has a vast ocean of training data, but it's likely of lower quality, since much of it is written by beginners and those who aren't primarily programmers.
        • kraf 11 hours ago
          That's what I'm thinking too. There is a lot of noise and I know teams where the majority of the people writing Python just have no idea what they're doing.

          I'm working with Clojure which is used mostly by senior engineers and it still blows my mind how well Claude writes software in it even though it's a fringe language. It's even able to pick up in-house DSLs written with macros.

        • smoe 10 hours ago
          Having used Python on and off for 20 years, my experience with LLMs writing Python has been mixed. I don’t think that’s necessarily because of a low-quality dataset, but rather because Python’s applications are so broad and the language has gone through several paradigm shifts over time: sync vs. async, typed vs. untyped, scientific Python looking very different from web application code, some people really wishing it were an FP language, and others doing the clean-architecture OOP onion soup. It has gotten so fragmented.

          Recently, I had a more pleasant experience using LLMs with Go. It reminds me a bit of Python 2.x, when the community seemed, in my view, more focused on embracing a stupid simple language, with everyone trying to write roughly similar "Pythonic" code.

          • stingraycharles 9 hours ago
            > Having used Python on and off for 20 years, my experience with LLMs writing Python has been mixed. I don’t think that’s necessarily because of a low-quality dataset, but rather because Python’s applications are so broad and the language has gone through several paradigm shifts over time

            If there’s one language that is the prime example of this, it’s C++, and according to this benchmark it ranks incredibly high.

            I’m also thoroughly confused why Kimi 2.6 scores 83% while Opus 4.7 scores 67% for C++, and GPT 5.5 isn’t even in the top 10.

            Gemma 4 31B scores 100% success rate for Python (!!) while Opus 4.6 only 65%.

            This benchmark really seems to be all over the place and doesn’t make sense.

        • dariusj18 17 hours ago
          That was the hardest part of learning PHP: all the code examples online were just awful.
          • andai 13 hours ago
            Worked on a PHP project once. Every time I asked why something was done a certain way the answer was "dunno, we copy pasted this code snippet."

            Certain popular PHP codebases appear to use a similar methodology.

            • Sohcahtoa82 2 hours ago
              It's why I consider PHP to be "RCE as a Service".

              So much copy/pasted code, some of it REALLY bad, and PHP has a lot of foot-guns that can lead to RCE.

        • stefanfisk 17 hours ago
          Reminds me of the time I asked Claude to write some Wordpress code for me. The results were…rough.
        • librasteve 12 hours ago
          I was (pleasantly) surprised by Claude Code doing Raku - also with a limited training set (~2000 Stack Overflow questions, a bunch of Rosetta Code, 2,500 modules). I put this down to the quality of the code from the core community, who are all, frankly, uber-gremlins.
          • polytely 12 hours ago
            Yeah Raku feels so expressive and lovely to me with the help of an AI assistant. I've only done toy programs and scripts with it but it is actually so nice.
        • FireBeyond 16 hours ago
          All my vibe coded projects (personal) are Go backend services, with Typescript/React frontend. And my thoughts were based on similar things. Like why I wouldn't use PHP for that, either.
        • topham 18 hours ago
          There's a broken idea that AIs know Python because they're written in Python.

          Not how any of it works.

          • dasyatidprime 14 hours ago
            Not what anyone was talking about. Training corpus ≠ inference engine.
          • gertlabs 18 hours ago
            While recent models are capable of generalizing to any language at this point, I do think there are weights from their pretraining corpus that still leak through into how they create their responses. We observed similar language performance patterns across models from different providers, btw.
      • stingraycharles 15 hours ago
        I’m super surprised that C++ scores so high, this does not match our experience at all, and for anything performance critical it always drops the ball completely.

        I also don’t understand how these “games” map to real-world complex problems. How are you measuring success? How does “adversarial customer service” map to “this LLM is better at C++ than the other”? How are you sure you’re not just benchmarking language suitability for a problem?

        I have so many questions about this…

        • shimman 1 hour ago
          It's a VC-backed company using "scoreboards" as a means of marketing. There is nothing scientific or academic about this; stop assuming as much.
        • gertlabs 14 hours ago
          - The majority of the environments can be played where the agent writes code to work the environment towards a goal. So the model is problem solving, and it has to do so in a particular language, and some languages outperform others. We have a lot of data to back up the improved compiled language performance, but note these are for successful code submissions (failures are counted in a different metric). With the Languages chart we're moreso measuring how good the ideas they came up with were, once they already compiled/didn't fail basic environment rules.

          - You need to run evals at scale to converge on this kind of behavior: these benchmarks run samples across a pool of hundreds of different types of environments

          - Some games are too open-ended to support code play. The customer service game is an example of that, where models are called on every tick of the environment to make a decision (that's the 'decision making' part of the evals which is weighted lowest). Very interesting results but not testing coding ability, just general reasoning.

          Not sure what issues you have with models writing C++ vs other languages, but I can imagine all sorts of C++ specific bottlenecks not directly related to the model's ability to reason in the language, like the dependencies, verbosity, extra effort to manage memory, etc. I have only done a little C/embedded work since agentic coding happened but I was pleasantly surprised.

          • bdamm 14 hours ago
            I've found the current cream of the crop to be quite good at resource management. I've sic'd Opus on some very gnarly lambda context bugs and it has directly improved the stability of the product I'm working on right now in a very substantial way. It couldn't quite do it entirely by itself, but with the right nudges here and there, it has absolutely accelerated the debugging work. It is particularly good at analyzing crashes and piecing together the detective work of what preconditions must exist for certain crashes to occur.
          • stingraycharles 12 hours ago
            I think my problem is that I'm not sure I understand whether your evals are testing language abilities or reasoning abilities.

            It seems to present results as if they’re testing language abilities, but the problems seem to be reasoning problems.

            • gertlabs 4 hours ago
              I'm not so sure there's a difference. The main thing we want to measure for LLMs is broad reasoning capability, but seeing how that ability changes under different constraints (like programming language) is the interesting part.
      • doug_durham 1 hour ago
        Reasoning??? Since when do LLMs reason? The word "reasoning" has become almost worthless in modern computer science due to misuse.
      • isityettime 18 hours ago
        I would love to see how they do with functional languages and especially Lisps here. I've noticed pretty good performance with Emacs Lisp relative to overall model strength, but I haven't used LLMs to application code in any such languages.

        It would also be interesting to see how Python compares to other languages in its niche (Ruby, Perl, Raku).

        Thanks for putting this together! It's interesting.

        • regularfry 12 hours ago
          I've noticed that with clojure(script) unless you specifically instruct them to keep nesting levels low, they can hit a point where they make a paren placement error and can't debug their way out of it. Although in my case while one model made the error then couldn't find what it had done, a second model that I switched to was then able to identify it and back it out. So I suspect this is a transient weakness in today's models, not something fundamental.
          • isityettime 6 hours ago
            It's a bit of a pitiful way to fail. I wonder if diffusion models could handle parenthesis matching better. And I wonder if you could rig up tools for structural editing like with paredit.
            • regularfry 4 hours ago
              It's one of the drawbacks of having quite so little syntax. There's just less to grab hold of.
          • forlorn_mammoth 7 hours ago
            That's because you are holding it wrong. Just replace the ( with rs, like in strawberry.
        • gertlabs 17 hours ago
          That's a good idea. Would you rather see Lisp or Scala? Any interest in Prolog? We are trying to be selective to keep the data concentrated, but we will eventually add a couple more, most likely to sample different programming paradigms.
          • 1659447091 14 hours ago
            If you are taking request, I was hoping to see clojure on there.
            • andai 13 hours ago
              My spider sense tells me the immutable-ness would help with correctness, but I'm not sure how much difference it would make in practice. Would love to see some numbers.

              A relative lack of training data might have a bigger effect though.

          • isityettime 10 hours ago
            I think Clojure would probably make for a more interesting comparison because its syntax is more different from the other languages currently on there and it's less multi-paradigm than Scala is (it doesn't support OOP, it's more explicitly immutable-first). I think Scala is a lovely and cool language, but I'd be more interested in the Clojure comparison here.

            Prolog might be interesting because I bet nobody is trying to train very hard on it, but I'm less directly interested in model performance with Prolog.

            • iLemming 1 hour ago
              > it doesn't support OOP

              That is only accurate if OOP means "inheritance-based class hierarchies with mutable state" - which is one narrow definition of it. Clojure has solid OOP support, just not in the class-hierarchy-first sense.

          • phillc73 15 hours ago
            Just last night I was going down the rabbit hole of "what's the best programming language to use for vibe coding." I came to a short list of:

            a) Typed Racket

            b) OCaml

            c) Julia

            I would love to see those three added to your benchmarks. And Mistral Medium 3.5 added to the LLM list, please.

            • gertlabs 15 hours ago
              Thanks for the recs, we will look into adding some of these, maybe OCaml for variety. I'm not familiar with Racket.

              Mistral Medium 3.5 is on there, but you will have to scroll down pretty far to find it (does not perform well): https://gertlabs.com/rankings?mode=oneshot_coding

              • isityettime 10 hours ago
                Racket is a variety of Scheme that grew up as a teaching language, but now also has a few other notable niches as well.

                Typed Racket is to Racket as TypeScript is to JavaScript: it adds some additional static checks to an otherwise dynamic language via gradual typing. This pair of languages might help begin to answer the question "does gradual typing generally help LLMs, or does TypeScript outperform JavaScript for incidental reasons?".

                Among Lisps, I'm most interested in seeing Clojure because it's a language I can see myself using with LLMs at work. But Typed Racket and Racket could make an especially interesting pair because of the gradual typing thing.

                I'm not sure whether you want to include them in your project. The kind of selectivity you describe yourself as going for is hard for me, especially since I'm not the one doing the work. :)

                PS: Aside from this benchmarking and comparison project: Racket is an interesting language and seems like a good place to start if you want to explore classic Scheme texts (Structure and Interpretation of Computer Programs, The Little Schemer, How to Design Programs) or newer ones that try to teach newer or more specialized ideas (e.g., The Little Typer). You may have to tweak the language a bit to stay faithful to some of those books, but that's something Racket is good at and there are already sources noting relevant differences online.

                When a non-programmer in my life expressed curiosity about programming, we ended up starting HtDP together and it's been fun. I think Racket was a good choice for that.

              • phillc73 15 hours ago
                Thanks for that, I hadn't scrolled down far enough.

                Just want to be sure I'm reading the results correctly... When I compare GPT-5.5 with Mistral Medium 3.5, I see in the tables:

                a) Mistral beats GPT in Java and C++

                b) It's close for Rust

                c) GPT-5.5 easily wins for Go, Javascript, Python and Typescript

                Model choice really does appear to be language dependent (assuming I'm reading the results correctly).

                • gertlabs 14 hours ago
                  The deeper you go into the filters (single models, cross-correlated by specific languages), the smaller your sample sizes. A known limitation; tbh I doubt Mistral is better than GPT 5.5 at programming in any specific language, and it probably just hit a few lower-quality GPT 5.5 generations by chance (but I could be wrong! We're always adding more samples, so the data improves over time. We always prioritize largest sample counts for near-frontier models first).
              • regularfry 12 hours ago
                What's going on with Qwen3.6 27b? Filtered to Python it comes out at the top of the list, which seems... well, unlikely.
                • gertlabs 5 hours ago
                  The more filters applied (one-shot coding only, Python only), the more variation you can expect from fewer samples -- that being said, it really is a great model so it's probably not too far above where it would end up with infinite samples.
                • johndough 10 hours ago
                  While Qwen3.6 27B and 35B-A3B are very good, I am skeptical about them being that good. I think another factor is at play here.

                  The Qwen3.6 models have memorized some common games. For example, if you ask it to create an index.html with a snake game, it will generate almost the same high quality snake game every time. The relatively low success rate of 25% but high average percentile of almost 100% for one-shot coding in Python suggests that the model is extremely good at few tasks.

                • 2ndorderthought 10 hours ago
                  Qwen3.6 27b is a really strong model.
                  • regularfry 9 hours ago
                    Yeah but that strong?
                    • 2ndorderthought 9 hours ago
                        Yes, that strong. It's only lacking in context length, but it's not that small there, and it gets caught in circles more often than, say, a 1T-parameter model does.

                      That's why a lot of people have been freaking out about local LLMs since april. There's finally a decent model that runs locally on a GPU or two that can do agentic programming at a reasonable enough tokens per second.

                      • johndough 5 hours ago
                        > it gets caught in circles more often than, say, a 1T-parameter model does.

                        I've found that the Q5+ quants are less loopy than Q4. Still not perfect, but noticeably better.

                        > reasonable enough tokens per second

                        The speed has been amazing. I've been running the recent llama.cpp MTP branch with an uncensored variant of Qwen3.6-35B-A3B on my RTX 3090 at over 170 tokens per second, and it was able to turn a buffer overflow into a reliable shell exploit in just a few seconds (with reasoning disabled). Still a bit loopy though. Hopefully, the Qwen team will pay more attention to those looping issues. It feels like their models are especially susceptible.

            • andai 13 hours ago
              Those are some fine languages, but how did you pick them? What was the criterion?
              • phillc73 12 hours ago
                The initial criteria were strongly typed and functional-first. Using an LLM for answers, of course, that returned me a list that looked like:

                - Haskell

                - OCaml

                - F#

                - Scala

                - Gleam

                - Purescript

                - Grain

                - Idris

                Then I asked if there were any Schemes or Lisps that met the initial requirements, which added a bunch more options (Typed Racket, Typol, Elm, ReScript etc).

                Then I asked about Julia specifically, as it's a language I'm already reasonably familiar with and knew that it's possible to write it with static annotations.

                Next I started filtering the list based on additional criteria; didn't want to target a JS compilation target, performance, size of package ecosystem, tooling, community, learning curve (I do want to review and understand the output).

                There were a bunch of follow-up questions over a few hours of prompting, reading and a couple of beers. All this resulted in the shortlist of OCaml, Typed Racket and Julia.

                Julia pretty much remains in there, even though it doesn't really meet the strongly typed initial criteria, based on my familiarity, the ecosystem especially for AI/ML tasks and performance factors.

                I know zero about OCaml and find the thought of learning it a bit daunting. Typed Racket seems more approachable anyway.

        • librasteve 12 hours ago
          I just did a side-by-side with Claude Code Python vs. Raku for DSL use ... https://slangify.org if you are interested.
      • fulafel 16 hours ago
        What would comparing rates across languages tell in the context of this benchmark? Are the tasks the same or robustly difficulty-normalized across the languages?

        Also somehow the 2 language comparison graphs (avg percentile and success rate) rank Python in dramatically different positions, with Python outranking Rust and Java in the success rate. What does the avg percentile mean in this context?

        • gertlabs 13 hours ago
          Success rate measures the share of code submissions that played the game/environment without failing (compilation, breaking game rules, violating the sandbox, etc.), so it makes sense Python would do better there.

          Percentile compares only the submissions that didn't hard-fail. So they are a bit different, and we incorporate them both into the combined score.

          • Yokohiii 6 hours ago
            Comparing Rust to JavaScript, the gscore distribution is rather similar, while Python falls off. I don't see why Python should be so much worse.
            • gertlabs 5 hours ago
              This was an unexpected result, and it held up under large sample sizes.
      • robot-wrangler 16 hours ago
        > Data here: https://gertlabs.com/rankings?mode=agentic_coding

        Oh wow, we got "tribal domination", "market simulator" and "adversarial customer service". I don't know what those are but it sure sounds like big torment nexus milestones

        Maybe we could at least play nicer games like hackenbush and act surprised when there's some wicked use-case that's isomorphic.

        EDIT: Ok fine. I like "Rubik's Cube Chess" a lot. Never heard of it, is this analyzed formally at all? Hard to search for since there's tons of collisions

        • gertlabs 28 minutes ago
          Not formally analyzed -- in practice, we see a lot of repetition/draws from code submissions. Our version is custom, and uses more pawns, which can move in any direction but don't upgrade to other pieces. We try to include just as many cooperative games as competitive games; both are important for measuring model ability in the real world.
      • js8 16 hours ago
        The LLMs are generally still pretty bad at (deductive) reasoning. IME they go along more with the things like variable names and comments than the actual program logic (it would be an interesting experiment to compare LLM's understanding of three identical programs with different identifiers, one with normal identifiers, one with obfuscated identifiers, and one with deliberately misleading identifiers). I also think this particular comparison comes down to typing, which helps to avoid LLM's reasoning go astray.

        When we reason we need to typically propagate the constraints to arrive at a solution to these constraints. I think the best language to reason in could be something like Lean, which allows both constraints and actual code to be expressed at the same time. Although this might not be the case for current LLMs, as I explain above.
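
        A minimal sketch of that identifier experiment (the function here is hypothetical, chosen only for illustration): three semantically identical Python functions in which only the names differ, so any difference in how an LLM describes them reflects reliance on identifiers rather than on program logic.

```python
# Three semantically identical functions; only the identifiers differ.
# If an LLM follows the logic rather than the names, it should describe
# all three the same way.

def second_largest(numbers):  # normal identifiers
    top = runner_up = float("-inf")
    for n in numbers:
        if n > top:
            top, runner_up = n, top
        elif n > runner_up:
            runner_up = n
    return runner_up

def f(a):  # obfuscated identifiers
    x = y = float("-inf")
    for b in a:
        if b > x:
            x, y = b, x
        elif b > y:
            y = b
    return y

def smallest_item(values):  # deliberately misleading identifiers
    minimum = total = float("-inf")
    for count in values:
        if count > minimum:
            minimum, total = count, minimum
        elif count > total:
            total = count
    return total

print(second_largest([3, 1, 2]), f([3, 1, 2]), smallest_item([3, 1, 2]))  # 2 2 2
```

        Asking a model "what does `smallest_item` return for `[3, 1, 2]`?" and comparing its answer against the other two variants would give a rough measure of how much the names are steering it.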

        • by364 14 hours ago
          Wait till you look inside a neural network and realize they're incapable of deductive reasoning! Amazing how many devs who talk about "AI" would probably have a hard time telling apart deductive and inductive reasoning.
          • js8 14 hours ago
            That's actually untrue. Yes, training a neural network is mostly an inductive reasoning process. However, the ability of LLMs to reason deductively (as a chain of thought, although that's probably not the only mechanism) is an emergent phenomenon, arising from training on data and problems that exhibit deductive reasoning.

            But of course, because the deductive reasoning is inductively taught, there may be shortcuts that compromise its soundness. Hence my claim: LLMs are not as good at it as other algorithms, although they have many other strengths that make up for it.

          • kelseyfrog 14 hours ago
            How so?
      • bushbaba 18 hours ago
        Cool to see my hunch backed by data. Python is a scripting language with OOP bolted on. That means there isn't really the styling consistency other languages have; things tend to look like PHP, a collection of various scripts that invoke one another.
        • toxik 15 hours ago
          Python was designed with objects in mind from day one.
          • regularfry 12 hours ago
            "Designed" is doing a lot of work here. There are clearly bits that are just bolted on because they didn't want to change the syntax.
        • Daishiman 1 hour ago
          They don't tend to look like that. Grab any one of the packages people use in Python and it's generally well-disciplined OOP code.

          There are teams of people who don't come from an engineering background who do utilize Python as a series of scripts with some extra sugar. Just because you can do that doesn't mean that you should.

        • nsbk 15 hours ago
          EVERYTHING in Python is an object. I’m not sure how that could have been bolted onto the language
      • w0m 17 hours ago
        Huh. This surprises me. Digging in, it looks like it comes down to interpreted and dynamically typed vs. compiled and statically typed.

        TIL. If I were to start a truly vibe-coded project, Go would have a significant leg up.

        • dnautics 17 hours ago
          and yet dynamically typed elixir wipes the floor with go.

          https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/blob/ma...

          • bontaq 15 hours ago
            LLMs get ridiculous with elixir, especially with the repl, runtime, and ability to hot reload / directly test functions. It's really surprising to me it hasn't caught on more but I guess you have to see it to believe it.
            • cultofmetatron 12 hours ago
              built my startup in elixir and can concur. elixir has a relatively consistent syntax that makes for a pretty good target for llms.

              In my opinion, the only thing holding elixir back as an llm deliverable is that there's not as much training data for llms to work with.

              Of course if we had a new AI that could be trained on a minimum of existing training data, common lisp would absolutely beat out everything else. everything you mentioned about elixir (repl, runtime, and ability to hot reload / directly test functions) are possible and were invented in lisp with an AST instead of a syntactic language as the ultimate build artifact. CL lets you recover from exceptions and rewind the stack before reloading your fixes and continuing. I can't even fathom the workloads an LLM could conceive of working with that.

      • riedel 14 hours ago
        My feeling is that for agentic tasks it's not only language design but also LSPs, error messages, and static analysis capabilities that dominate the benchmarks. It would IMHO be interesting to look into better subsets of Python and style/rewrite techniques, as well as alternative linters and their effects on performance.
        • kevinautumn 14 hours ago
          A strict compiler is basically a free feedback loop for the LLM.
          • andai 12 hours ago
            Also the human. (I like being told about my bugs when I write them, instead of at some generally much more unpleasant moment in the future.)
        • andai 12 hours ago
          But then why does JS score 50% better? (Almost identical to TypeScript.)

          Actually, JS can get a surprising amount of "intellisense" as well. Not sure if that was used here though.

        • gtrealejandro 14 hours ago
          [dead]
      • hooloovoo_zoo 14 hours ago
        Mm, the code is constrained to run inside a game 'tick'?
      • andai 13 hours ago
        I thought it might have to do with the type system, but JavaScript type system is atrocious and it scores about 50% higher. So my theory does not make much sense.
      • altmanaltman 18 hours ago
        Hey they said it had a lot of training data, not necessarily high-quality python code training data.
      • ricardo_lien 18 hours ago
        This surprised me, but I can understand it - Python sucks in many ways lol.
      • goodmattg 17 hours ago
        [dead]
      • rossjudson 18 hours ago
        My standard joke here:

        Q: Say, what does this Python code do?

        A: Nobody f&%^ing knows.

        • thfuran 18 hours ago
          That’s Perl.
    • dillon 14 hours ago
      I had an itch to give Perl another go after a 5-year hiatus. I wanted a super simple way to spawn a proxy I was building in Go, along with writing various integration tests. I used Claude Code to write the bulk of it and found Claude to be remarkably good at Perl. I told Claude to only use what's built into Perl's standard library rather than reaching for anything on CPAN. Turns out everything from HTTP clients to TLS and JSON is built in, which makes it a very stable and easy replacement for what I would normally have implemented in shell scripts. My theory is that because Perl hasn't changed all that much and has a ton of training data, Claude is actually quite good at Perl for cases where you might think to write shell scripts.
      • obelos 1 hour ago
        I'm not sure it's really from the lack of change, though. I've used Claude, Kimi, and other LLMs to write a ton of Perl that's jacked up with weird sugaring packages like Moose and Function::Parameters with reified types, and they seem to pick up the new idioms pretty effortlessly. It's a really unexpected fluency, frankly.
      • imhoguy 7 hours ago
        Plus Perl has a very efficient, minimal syntax; with the "Perl golf" training set it's almost like ASCII bytecode for LLMs.
      • hiAndrewQuinn 13 hours ago
    • bensyverson 19 hours ago
      Just use Go. LLMs have seen a ton of it, they write it well, it compiles practically instantly, and it has all the advantages of a typed compiled language.

      I created a big Python codebase using AI, and the LLM constantly guesses arguments or dictionary formats wrong. Unit tests and stuff like pydantic help, but it's better to avoid that whole class of runtime errors altogether.

      • mbreese 19 hours ago
        That’s what I’ve settled on. Python is so flexible that there are a million ways to organize code, pass arguments, etc. If you already have a code base to work from, an LLM can make new code in the style of the old code. But a fresh project? Once you get to a certain level of complexity it quickly can turn into write once, read never code (even if the code is passing tests).

        This is where I’ve found that a compiled, strongly typed language (any one really) works well with an LLM. With the little bits of friction that is part of writing a language like Go, the LLM can produce pretty decent (and readable) code.
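
        As a small illustration of that flexibility (all names here are hypothetical), the same configuration can be passed in at least four equally idiomatic ways in Python, and nothing in the language forces consecutive LLM generations to pick the same one:

```python
from dataclasses import dataclass

# Four equally legal ways to pass the same configuration in Python.
# An LLM writing fresh code may pick a different one each run, which is
# where style drift in generated codebases comes from.

def connect_positional(host, port, timeout):
    return (host, port, timeout)

def connect_kwargs(**kwargs):
    return (kwargs["host"], kwargs["port"], kwargs["timeout"])

def connect_dict(config):
    return (config["host"], config["port"], config["timeout"])

@dataclass
class Config:
    host: str
    port: int
    timeout: float

def connect_dataclass(config: Config):
    return (config.host, config.port, config.timeout)

expected = ("db.local", 5432, 1.5)
assert connect_positional("db.local", 5432, 1.5) == expected
assert connect_kwargs(host="db.local", port=5432, timeout=1.5) == expected
assert connect_dict({"host": "db.local", "port": 5432, "timeout": 1.5}) == expected
assert connect_dataclass(Config("db.local", 5432, 1.5)) == expected
```

        A language like Go offers far fewer of these choices, so generated code tends to converge on one shape.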

      • shepherdjerred 18 hours ago
        Why use Go when you can use Rust?
        • bfung 18 hours ago
          1. Amount of Rust training data isn’t as much as Go.

          2. Golang syntax and style are verbose yet simple. There aren't as many options, nor as much language-to-domain mapping needed, as in Rust. That means a less sophisticated LLM can spit out Golang successfully and efficiently.

          • adastra22 16 hours ago
            This must really depend on your niche. I assume you do web stuff or something? Good luck finding any golang examples in a lot of other fields. Rust, on the other hand, is taking over the world in systems programming.
            • coldtea 13 hours ago
              >Good luck finding any golang examples in a lot of other fields.

              There are go examples (and full blown programs) for anything, from servers to Kubernetes and Docker.

              • adastra22 6 hours ago
                Yes. Maybe I should have said web stack?
              • Yokohiii 6 hours ago
                which is pretty much webdev for the larger industry.
            • benjiro3000 6 hours ago
              [dead]
            • krilcebre 13 hours ago
              Been reading and drinking that kool-aid for some time until I realized it's just internet-bubble mumbo jumbo. The majority of systems are still written in C and C++, and will be for the foreseeable future.
        • wiseowise 13 hours ago
          So I can test my feature today instead of waiting until it finishes compiling tomorrow.
          • baq 11 hours ago
            this is the top reason for a reasonably complex project, but it can be worked around by preplanning crates.

            the other reason is function coloring, if you really want async as is in vogue nowadays - but this is rapidly becoming irrelevant, see the article.

            • wiseowise 10 hours ago
              > but it can be worked around by preplanning crates.

              Maybe if you're working alone.

        • bensyverson 18 hours ago
          In short, compile times and a more full-featured stdlib
        • Aerroon 18 hours ago
          Doesn't Rust have long compile times? Does Go suffer from the same problem?
          • bdamm 14 hours ago
            One of the design goals of Go was to be fast to compile. And they achieved it.
          • adastra22 16 hours ago
            Go famously has stupidly fast compile times.
        • DeathArrow 15 hours ago
          Because LLMs are better at Go? And because some people understand Go code easier and they might want to look at the code?
        • Alejandro2026 18 hours ago
          why,i have same question
          • bionhoward 18 hours ago
            I’m heavy into rust and never really use golang, but one big benefit of go over rust is compile times are significantly quicker, which could be more fun if you’re running CI checks 50 billion times
            • coldtea 13 hours ago
              >which could be more fun if you’re running CI checks 50 billion times

              Even running them 5 times it's WAY more fun

        • up2isomorphism 13 hours ago
          why use Rust when you can use Zig?
      • morningsam 12 hours ago
        >the LLM constantly guesses arguments or dictionary formats wrong [...] it's better to avoid that whole class of runtime errors altogether.

        Use Mypy in strict mode and run it in the post-turn hook of your LLM harness so the LLM has no choice but to obey it. And don't use overly general dictionary types when the keys are known at development time; use TypedDicts for annotations if you must use dicts at runtime.
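
        A minimal sketch of that TypedDict suggestion (the record type here is made up for the example): with a precise dict type, `mypy --strict` turns a guessed-wrong key into a check-time error instead of a runtime KeyError or a silently wrong value.

```python
from typing import TypedDict

# A TypedDict pins down exactly which keys a dict must have, so a type
# checker can reject an LLM's wrong guess before the code ever runs.
class UserRecord(TypedDict):
    name: str
    age: int

def greet(user: UserRecord) -> str:
    return f"{user['name']} ({user['age']})"

ok: UserRecord = {"name": "Ada", "age": 36}
print(greet(ok))  # Ada (36)

# Under `mypy --strict`, each of these would be rejected at check time:
# bad1: UserRecord = {"name": "Ada"}             # missing key "age"
# bad2: UserRecord = {"name": "Ada", "Age": 36}  # unknown key "Age"
```

        Run as a post-turn hook, something like `mypy --strict .` gives the model the same tight feedback loop a compiled language provides for free.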

      • mountainriver 19 hours ago
        Why? Go has a GC, is basically incompatible with C and very limited overall
        • Alejandro2026 18 hours ago
          Go's limited syntax is actually a feature here,because it stops the LLM from trying to be too clever
          • spongebobstoes 17 hours ago
            LLMs use `any` types, `recover`, `init`, and other weird warts of golang

            rust is a better language in every way for LLMs: more precise typing, better compiler errors, fewer performance footguns, no data races, clear interface definitions and implementations

            golang is easier for humans to quickly get productive, but the language is lacking in helpful features for an LLM

        • baq 11 hours ago
          'incompatible with C' isn't a serious problem nowadays and won't be a problem at all in a couple years.
          • mountainriver 6 hours ago
            or the language was fundamentally designed to make mid level engineers more productive...
        • badc0ffee 16 hours ago
          CGO exists.
          • mountainriver 6 hours ago
            uh huh, how's the trampoline working out?
      • trimbo 18 hours ago
        Yup, adopting Go is exactly what I've done too.

        Typed, garbage collected, fast to compile and run, stdlib that includes just enough to work out of the box. I really don't like writing it by hand but for the LLM it's perfect.

      • hirvi74 19 hours ago
        But what is the selling point for Go? I get that it's hailed as a simple language with basically no batteries included, but why is that a selling point? Does Go excel at anything no other language does?
        • sly010 16 hours ago
          No batteries!? Go has a huge stable standard library no other language even comes close to. Built in tooling for unit testing, performance testing, debugging, code formatting, package management, etc. And most go binaries can be compiled statically so libc is not even a dependency. Golang is the definition of batteries included.
          • coldtea 13 hours ago
            >Go has a huge stable standard library no other language even comes close to

            Well, Java and Python do.

            • mrsmrtss 1 hour ago
              C# (.NET) would probably be the winner here. Go standard library is rather minimalistic compared to it.
            • walthamstow 12 hours ago
              Yet the first thing most people do before making an HTTP request is pip install requests
              • coldtea 11 hours ago
                Yet, a nicer request wrapper is not the be all end all of batteries, and Python covers a huge spread of libs
            • IshKebab 6 hours ago
              Python has a quite random collection of stuff, and it's often quite low quality and people don't use it anyway. I wouldn't say it is close to Go.

              I haven't used Java for a decade or so but as I recall its standard library was pretty bare bones (similar to Rust).

              Apparently C# has a pretty comprehensive standard library but I've never used it.

              • coldtea 5 hours ago
                >Python has a quite random collection of stuff, and it's often quite low quality and people don't use it anyway. I wouldn't say it is close to Go.

                People who came into Python for ML and Data Science, and just care for their array and ML libs maybe.

                But long time Pythonistas absolute use Python's standard library - and it's hardly "quite low quality". "Batteries Included" is one of the community slogans.

              • hirvi74 4 hours ago
                > Apparently C# has a pretty comprehensive standard library but I've never used it.

                I use C# more days than not. The comprehensive standard lib is impressively large and accomplished everything I need. Third-party libraries is a real pain point though. I haven't looked in sometime, but things like sane PDF libraries, reporting libraries, etc. were severely lacking when I needed them last. As much disdain as I have for Java, I think it is better in that regard.

          • hirvi74 4 hours ago
            Not my words, lol. It's what people say about the language. But thank you, this was the type of answer I was looking for.
          • wiseowise 13 hours ago
            > Go has a huge stable standard library no other language even comes close to.

            Java, C#, Python, Node.

        • coldtea 13 hours ago
          Go has a very full featured standard library.

          It's simple (do you really ask why that's a selling point?)

          It's fast to compile.

          It's fast to run.

          It's good with parallelism.

          It has myriads of examples, and LLMs can pick it up well too.

          It has good backing.

          It has good tooling.

          It's fun.

          It statically compiles to a trivially deployable binary.

          It's excellent at cross compiling.

          It has good adoption.

          • mrsmrtss 2 hours ago
            Fun it is not.

                if err != nil
            
            It's amazing how they managed to design a new language with all the flaws of '90s languages.
          • hirvi74 4 hours ago
            > (do you really ask why that's a selling point?)

            Yes. Assembly languages are simple, but that does not mean they are easy to use well.

            Fast to compile? A nice to have but not a requirement of mine. Parallelism is nice, but not something of value in my current project. Perhaps the next though!

            I do like the LLM ease. It makes learning the language n times faster.

            I can get behind the good backing, tooling, fun, and ease to deploy.

            Hmm, I am supposed to be working on a game for a friend soon. I was going to go with C#, but I might play with Go and see how that goes.

        • CodesInChaos 12 hours ago
          1. It has first-class co-routines, so supports high concurrency without having to deal with async bullshit

          2. It produces a dependency-less statically linked binary

          3. Duck typed interfaces give you static typing with minimal ceremony. They are implemented even for types outside your own code base, which is a common pain point in Java or C#.

          4. It compiles quickly

        • intelVISA 11 hours ago
          I really don't like the lang itself but nobody will deny it has a very strong ecosystem and stdlib for handling around 95% of many well-solved problems you are likely to encounter.
        • clintonb 16 hours ago
          I picked Go because it tends to use fewer resources than Node.js, and startup time is quite fast.
          • hirvi74 4 hours ago
            I've never used Node, so I can't judge it. I'm still stuck in the C# world. So, I am curious how Go performs in comparison. I'll need to do some research.

            Though, it was a slap in the face for a lot of C#-ers when Go beat out C# for the Typescript compiler rewrite. I personally do not mind because C# is my Enterprise language, but it's not my favorite language or anything.

        • kansface 5 hours ago
          Golang truly excels at cross compiling.
        • smw 7 hours ago
          goroutines make building scalable network (web) servers trivial and fast. I'm not a big fan of the language, but the runtime is fantastic.
        • enneff 19 hours ago
          For one thing it’s statically typed and has many fewer foot guns than Python, so the llm-produced code is more likely to do what you expect.
          • shepherdjerred 18 hours ago
            Go is statically typed but the type system leaves much to be desired.

            Go’s benefits are primarily around simplicity, readability, and concurrency.

            • coldtea 13 hours ago
              >Go is statically typed but the type system leaves much to be desired.

              Not that much. Looking at Rust or Haskell complexity, I don't really desire it.

          • wiseowise 13 hours ago
            Python has a much better type system than Go, I don’t know what you’re on. With Trio it has better async capabilities too.
        • pylotlight 18 hours ago
          Performance? Second only to rust and other lower level langs. Surely you don't need this spelled out for you...
          • nvader 18 hours ago
            Not just performance, but static typing and prevalent in the training data/easy for LLMs to reason about.

            Of course, your response admits, "second to Rust", which I am guessing is an unspoken question in the grandparent's mind.

          • za3faran 18 hours ago
            Java and C# are there and faster.
            • DeathArrow 15 hours ago
              Yes, but kids these days only consider JS, Python, Rust and Go.
          • hirvi74 18 hours ago
            If performance is the main difference, whatever that means, then basically Go should be reserved for when Rust and other lower level langs cannot be used due to some other constraint? Are we mainly talking about performant Web backends?

            Say I am building some app that I know will be CPU-bound, why choose Go over say... Swift?

            • overfeed 16 hours ago
              > why choose Go over say... Swift?

              Language religious wars are silly: you should choose a language based on your constraints and personal tastes. If there's no clear advantage of one language over another for a given task - then all the options are viable, pick one and get on with solving the problem.

            • coldtea 13 hours ago
              >If performance is the main difference, whatever that means, then basically Go should be reserved for when Rust and other lower level langs cannot be used due to some other constraint?

              Or when performance is the main but not the only difference, and there are many other benefits.

              >Say I am building some app that I know will be CPU-bound, why choose Go over say... Swift?

              Because unless you're building for macOS/iOS, Swift is really a no-go, with lackluster support for other platforms. Plus slow to build and convoluted.

        • DeathArrow 15 hours ago
          >I get that it is allegedly hailed to be a simple language

          That might be its core feature if you do agentic coding.

        • chickenman_98 19 hours ago
          I think that’s sort of the selling point no? It’s really boring. It has like -10 keywords, compiles insanely fast, and has a concurrency model that’s easy to use and read. LLMs are great at using Go tooling to sanity check along the way. It’s easy to write shitty Go but it’s really pleasant to work with if you find those things compelling.
          • khimaros 19 hours ago
            don't you worry about garbage collection?
            • camdenreslink 19 hours ago
              If you were using Python, then probably not.
              • bensyverson 18 hours ago
                haha exactly. I’m coming from Swift, and I don’t want to go back to manually releasing objects like I used to in ObjC, let alone reason about lifetimes.
            • mellow_observer 14 hours ago
              What's the big issue with GC nowadays? It has mattered to me exactly once in decades, and even then it was manageable by using a more low-level style in a hot loop. I see very few use cases where GC actually matters, and in those rare cases it's not like you were using Python beforehand anyway.
            • coldtea 13 hours ago
              Why the hell would he "worry about garbage collection"? That kind of thing is a cargo cult fear.

              Garbage collection is not an issue for 99% of programs. And for those that it is, there are ways to mitigate the issue (e.g. there are extremely high performance trading system written in Java, where every last sub-millisecond counts).

              Blanket fear of GC reminds me of when new programmers learned that assembly is lower level and can be faster, and wondered why everything isn't written in assembly.

      • DeathArrow 15 hours ago
        >Just use Go. LLMs have seen a ton of it, they write it well, it compiles practically instantly, and it has all the advantages of a typed compiled language.

        Or any of the faster typed languages you are most comfortable with, as you might need to look at the code some times. LLMs are great at writing and understanding C# and Java.

        • arw0n 14 hours ago
          Also there are still factors like domain, team expertise, and org ecosystem to consider. I love to use Rust for most things, but now I'm working with an org that primarily has expertise in Java, and I'm not going to rock the boat for barely any reason. Python is also still useful for most ML stuff, and Django is quite a pleasure to work with (although it wouldn't be my first choice).

          The great thing about LLM-assisted coding is that an experienced software engineer can acquire decent familiarity with a language quite quickly. And then has a useful sparring partner for understanding and using the quirks and features of a new language.

          • t0mas88 8 hours ago
            Same here, working with a team that knows Java, so I'm letting Claude write Java.

            If I compare the results to another team that uses Python with Claude I see slightly better results on the Java side. Not because Claude knows that better, but because the tools are more rigid by default which creates more of a self correcting loop for Claude. The Python side has Pydantic, but it's a bit of an afterthought, while in Java you can't skip the type checking.

            In the end you can do the same things on both sides, it's 95% a team/engineering culture difference. So pick the language that the team knows best.

    • gmueckl 20 hours ago
      Training data can't be the whole answer. LLMs are really good at translating to different programming languages. This makes sense, given that they are derived from text translation systems. I'm getting great results in languages with comparatively small bodies of freely available code. The bigger hurdle is usually that LLMs tend to copy common idioms in the target language and if it is an "enterprise-y" language like Java or C#, the amount of useless boilerplate can skyrocket immediately, which creates a real danger that the result grows beyond the usable context window size and the quality suffers.
      • dnautics 17 hours ago
        > Training data can't be the whole answer.

        Absolutely correct. Anthropic showed that as few as 250 examples can "poison" an LLM -- roughly independent of model parameter count.

      • lanyard-textile 20 hours ago
        Very true.

        I have to steer models hard for C++. They constantly suggest std::variant :P

      • jryio 20 hours ago
        In higher dimensional vector space, yes it can.

        Dimensionality gets bizarre in 1000-D space. Similarity and orthogonality express themselves in strange ways and each dimension codes different semantic meaning.

        Therefore, if the training data is highly consistent you are by definition reducing some complexity and/or encoding better similarity.

        In Go the statement

            result, err := Storage.write(...)
        
        
        Is almost always going to be followed by

            if err != nil { ... }
        
        In a highly dynamic language you may not get

           try { Storage.write() } catch (error) { ... }
        
        Unless explicitly asked for.
        • dnautics 17 hours ago
          It's a little bit old, but it may challenge your opinions about what matters for LLM agentic coding:

          https://github.com/Tencent-Hunyuan/AutoCodeBenchmark/blob/ma...

        • za3faran 18 hours ago
          > In a highly dynamic language you may not get

          Being dynamic is secondary. A language that uses exceptions for errors does not always need to surround every try with a catch if the code doesn't need to. You have a top level handler that would catch everything.

      • chromacity 20 hours ago
        > LLMs are really good at translating to different programming languages.

        ...for which ample training data is available.

        > This makes sense, given that they are derived from text translation systems.

        ...for languages with ample training data available.

        Yes, LLMs can combine information in novel ways. They are wonderful in many respects. But they make far more mistakes if they can't lean on copious amounts of training data. Invent a toy language, write a spec, and ask them to use it. They will, but they will have a hard time.

        • mbreese 19 hours ago
          I have a language I wrote for processing data pipelines. I’ve used it for years, but I can count the number of users on one hand. I wrote it partially to learn about writing a scripting language, partially because Nextflow didn’t exist yet. I still use it now because it works much better for my way of processing data on HPC clusters.

          The only code that exists on the internet for this is test data and a few docs in the github repo. It’s not wildly different from most scripting languages, from a syntax point of view, but it is definitely niche.

          Both Codex and Claude figured it out real fast from an example script I was debugging. I was amazed at how well they picked up the minor differences between my script and others. This is basically on next to zero training data.

          Would I ask it to produce anything super complex? Definitely not. But I’ve been impressed with how well it handles novel languages for small tasks.

        • lmm 20 hours ago
          That might be an argument for not using a novel homebrew programming language. But it's not an argument against, like, any top-100 or even top-1000 programming language, which will be adequately represented in the training data.
          • ambicapter 19 hours ago
            It is if more training data results in better performance. In which case, GP will continue to use the language that is likely to have the most training data available.
            • lmm 19 hours ago
              > It is if more training data results in better performance.

              Sure. But given the relation with translation systems, it seems far more likely that there are diminishing returns to larger volumes of training data.

        • agentultra 19 hours ago
          They are also good at generating plausible code, the kind that has no obvious bugs in it. I wouldn't be surprised if humans in the loop over-report success with these tools. Combined with decision fatigue, it's not a good recipe for humans making good decisions.

          An experienced Rust developer is going to be in a better position to drive an agent to generate useful Rust code than a Python programmer with little or no Rust experience. Not sure I agree with the author that everyone should just generate reams of Rust now.

          At least if you get paged at 3am to fix the 300k-line AI-generated Django blog, you'll have a chance at figuring things out. Good luck to you if Claude is down at the same time. But you're still better off than if it were written in Rust and you have no experience with that language.

    • not2b 20 hours ago
      That would matter if we were asking the AI to generate code open-loop: someone probably already wrote something close to what you asked for in Python. But if the agent generates code, tries to compile it, sees the detailed error messages and acts on those messages to refine the code, it's going to produce a higher quality result. rustc produces really good diagnostics. And there's a lot of Rust code online now, even if there's so much more Python and Javascript/Typescript.
      • ambicapter 19 hours ago
        LLMs don't actually semantically parse the error messages. They will generate the most likely sequence resulting from the error message based on their training data, so you're back to the training data argument.
        • not2b 15 hours ago
          They process those error messages in the same way that they process your instructions about what code to generate. It is just more commands.
        • neutronicus 19 hours ago
          Perhaps the training data about what compiler diagnostics mean is particularly semantically rich training data.
        • Tarq0n 17 hours ago
          Of course they do, error messages get tokenized and put into the context window just like anything else. This isn't a Markov chain.
      • hansvm 15 hours ago
        Except that the presence of errors, mistakes, contradictions, and doubling-back causes LLMs to produce substantially worse output, unless you use dedicated sub-agents that have been instructed about that deficiency and know to distill that kind of crap into better prompts for a different LLM with pristine, error-free context. Without hard numbers we're both just pissing into the wind, but it's entirely plausible that the higher rate of errors matters more than the fact that those errors are more ergonomic. Anecdotally, my LLM work is a _lot_ more productive when I have it draft the thing in Python and then translate it into Rust, since otherwise it wastes so much time on the tiniest of syntactic mistakes.
    • onlyrealcuzzo 19 hours ago
      I built a programming language, and LLMs can code phenomenally well in it.

      I don't think the training set matters that much, since there's no way they have my language in their training set!

      Programming languages have a lot in common. Python is kind of odd when it comes to languages.

      • zuminator 17 hours ago
        If the training data is basically irrelevant, then an LLM should be able to iteratively improve the programming language it uses, resulting in a custom language optimally designed to maximize its own coding ability. The source code might not even be human readable natively, just translated into pseudocode on an as-needed basis.
        • onlyrealcuzzo 17 hours ago
          > If the training data is basically irrelevant, then an LLM should be able to iteratively improve the programming language it uses, resulting in a custom language optimally designed to maximize its own coding ability.

          I won't be surprised if one day they do.

          At least in their current form, I don't think they can independently design a language that is so much better than other available ones that it makes sense for them to use it.

          There's a very good language for almost every use case already, designing one better than the ones already available is a VERY tall order.

          It's almost like these languages aren't designed by morons, but built by teams of geniuses over a decade instead.

          It's taken me 6 months of heavily steering an LLM to build a language that is not yet even ready for production use.

          Maybe I'm the one slowing the LLM down. But it certainly does not seem that way.

          The key to a good language for them - from my experience - is maximum expression plus minimum global complexity.

          Anything that makes you manage memory lifetimes & memory safety is inherently unfriendly to LLMs - that's globally complex.

          All scripting languages allow spaghetti aliases that let you hack your way into oblivion - and LLMs gladly ride that gravy train to hell.

          Rust excels here, because it prevents the worst and is WAY more expressive than most people think.

          Go has arguably the best runtime ever built, but it's not very expressive, and it still has a lot of problems from C and scripting languages - I don't think these types of languages will be the ones people chose to write code with for LLMs in the future.

    • impulser_ 17 hours ago
      People really need to stop assuming that more training data is always better. That is not how it works. LLMs thrive on consistency.

      Go, for example, has significantly less training data than Python, but LLMs are the best at it. Why? Go is written the same way everywhere. You go from project to project and the code all looks the same. There are only a few ways to write Go.

    • btown 20 hours ago
      Also, every single interpreter error has an entire corpus of StackOverflow-esque fix suggestions alongside it, and the model has been fine-tuned to minimize such errors on the first try. This hasn't been done for more obscure languages. You'll likely take more turns, on average, to get a working output, even if your problem is fully verifiable via test input/outputs - and if it's not verifiable, you don't want the "attention" of the model focused on syntax rather than the solution.
      • ruszki 18 hours ago
        There is no "entire corpus of StackOverflow-esque fix suggestions" about anything which is newer than a few years. I'm using cutting edge Android frameworks all the time. Yet, LLMs fix problems even when Google/Kagi has zero answers, which happens more often than not. We are way over this requirement.

        I especially found that there is no difference between languages in this respect. All generated code's architecture is terrible if you don't actively maintain it manually, and if you don't already have a few tens of thousands of lines of finely architected code in your codebase from which the model can understand how things should really be done. And the reason, I think, is quite simple: the average code on the internet - regardless of the market penetration of a given language - is simply bad.

        • btown 3 hours ago
          Well, for the time since then, the LLM providers have a corpus of every code suggestion they made to users and whether it resulted in positive or negative sentiment afterwards - which is arguably even more powerful. There's still some level of RLHF that's more prominent for popular languages.

          As you noted, of course, this doesn't apply to architecture. But that's also why I try to make sessions as turn-efficient as possible - you need every bit of context to get it to solve its own architectural rabbit holes.

    • robot-wrangler 20 hours ago
      > I could write in brainfuck with ai, but I presume, wouldn’t get the same results than if going with python.

      https://esolang-bench.vercel.app/

      • Tarq0n 17 hours ago
        The conclusions seem overly broad. Just because these languages are Turing complete doesn't mean they aren't massively hampered by expressiveness and amount of batteries included. To attribute all of this to training data memorization is premature.
        • robot-wrangler 16 hours ago
          Oh this is a very damning paper. Using simple languages from their definitions alone is a great proxy for studying truly out-of-distribution reasoning. Also just for following simple rules/instructions correctly, because a simple enough language is practically just a grammar. This paper is terrible for anyone who wants to make the case that models can do those things well.

          To the extent today's AI can reason, add this to the pile of evidence that you definitely need a harness. Counter to what you hear, that seems true for SOTA and frontier models, not just toy ones. Lots of people were saying years ago that someone should test exactly this, because it's obvious. Someone at a megacorp probably did try, and decided not to publish because they thought it was bad optics.

      • _boffin_ 19 hours ago
        and this sums it up right here.
    • tengbretson 19 hours ago
      Admittedly, I have very little experience with LLM-assisted Python. However, based on the severe degradation in output quality I have seen from an LLM working with plain JavaScript as opposed to TypeScript, I can't imagine choosing to start a project in Python at the moment.
      • fwip 17 hours ago
        It does seem like LLMs write better Python when told to use type annotations, especially when coupled with a linter.
        • aix1 16 hours ago
          I've been coding professionally in Python for about twenty years (alongside, at different times, a dozen or so other languages).

          I find that Claude can write great modern Python more or less out of the box, with minimal style guidance from me. I do have to nudge it from time to time to not do silly things, but overall it's really rather good.

    • jryio 20 hours ago
      I wrote about the meta thesis of programming languages in the training data here

      https://jry.io/writing/use-boring-languages-with-llms/

      • _boffin_ 20 hours ago
        Please distill instead of having me navigate off site. Include link for additional info.

        edit: side -> site

    • krzyk 12 hours ago
      With AI it is important to catch errors/hallucinations early; static typing helps with that.

      Languages with dynamic typing might hide some errors until runtime, while statically typed ones can catch them at compile time.

      With dynamic ones you need way more tests to cover some of the scenarios that compiler does for others.

      And there is a significant amount of code written "for ages" in languages that have been around longer, like C, C++, and Java (yes, I know that Python is quite old - older than Java, from 1991).

    • ocschwar 18 hours ago
      Seems to me these LLMs have a critical mass of Python training data and Rust training data, so there's no advantage for Python there.

      So as the article points out, an iterative process that catches the mistakes at compile time is much more suited for an AI than one that catches them at runtime.

    • Eridrus 19 hours ago
      The LLMs are actually worse at generating Python than other langs, hypothesized due to quality of training data lol.

      I still read the generated code, so I'm not quite willing to give up on Python yet though.

    • mountainriver 19 hours ago
      I moved all my LLM-written code from Python to Rust. I've seen absolutely no difference; most of the time I couldn't even tell you which it's writing in.

      My programs are faster and more reliable than they’ve ever been.

    • aaa_aaa 12 hours ago
      For some people reducing infra costs matter. Python is very very slow, even if it uses native libs.
    • imron 13 hours ago
      Large volumes of training data is a blessing and a curse, especially when you consider who wrote it.
    • osigurdson 18 hours ago
      I wouldn't say I get worse results with Go than I do with Python.
    • markboo 18 hours ago
      That's right, we don't need to care about the language, the same way we won't need to care about the map once FSD delivers on its promise of being end-to-end optimal.
    • bluegatty 19 hours ago
      There's enough training data on the other langs.
    • te_chris 14 hours ago
      1) the models do generalise so concepts translate 2) languages with more opinionated semantics and a better, more coherent community seem to be better. Python is a broad shitshow with multiple ways to achieve the same thing. Elixir is tight and focused. Claude is much better at elixir.
    • bmitc 18 hours ago
      > Read the first few comments and surprised I didn’t see it, but training data. The voluminous amount of Python in the training data.

      That's actually part of the point. Almost no one writes types for Python and has complete type compliance. So all that training data is people just yoloing Python, writing a bunch of poor code in it.

      I honestly can't believe any experienced software engineer would decide to build systems in Python these days.

    • th1sisoldnews 19 hours ago
      [dead]
    • faangguyindia 20 hours ago
      No. If that mattered, you'd write everything in HTML and CSS, because those have way more training data.
      • weird-eye-issue 20 hours ago
        Those are not programming languages.
        • goatlover 17 hours ago
          WASM then.
          • weird-eye-issue 16 hours ago
            That's more of a compilation target than a programming language and I don't really see the relevancy...
    • gerdesj 19 hours ago
      "I could write in brainfuck with ai"

      Well, go on and do the experiment! Perhaps LLMs can write code as well in BF as in Python, but I don't recommend it because hallucinations are really hard to notice in BF.

      If you are going to worry about high level computer languages and AI, you are going to have to start with getting to grips with machine code and assemblers and that. Once you know how say some Python code ends up being processed by your laptop CPU(s), then you will know when BF might be best!

  • luodaint 6 hours ago
    But under this framing, it sounds as though the developer's task is mostly prompt engineering. That is not the case.

    Even if an agent generates 90% of the code, each and every diff is going to land in my review queue. Python's readability isn't an advantage when writing; it's an advantage when reviewing. As the agent generates a piece of code, I have to read it, comprehend it, and determine whether it does what I want. This is the other 10% of the task, and it's the crucial one.

    Python is, thus, clearly superior to other languages in terms of ease of review.

    • throwforfeds 5 hours ago
      > Code readability of Python isn't an advantage during write; it's an advantage while reviewing.

      This is completely subjective though. I personally find that Python's lack of static types makes code very difficult to reason about. Yes, some devs will write decent comments and name things in a way that's easier to read, but most devs are lazy (myself included) and things get out of hand quickly.

      But this is also a subjective opinion, and you could argue that I feel this way because I spend most of my time in TypeScript, Go, and Rust.

      • BobbyJo 5 hours ago
        I would go even farther and say that static types are a tool designed specifically for a code reader.

        When you're writing the code, you know what the types are, as you literally just created/wired/whatever them. Static types become a benefit only when you visit code without that fresh context. For instance, third party libraries are far easier to use when the interfaces are typed.

      • rhdunn 5 hours ago
        Python has type annotations now [1] that type checkers, IDEs, etc. can use.

        [1] https://docs.python.org/3/library/typing.html

        • nrub 3 hours ago
          Yes, but:

          a) They're a second-class citizen, not guaranteed to be used in whatever niche of the Python ecosystem you find yourself in, and there's already an n+1 problem with multiple type checkers written by third parties, rather than one first-class, consistent language-supported tool. You're not going to get it by default; you're usually going to have to do some configuration (and maybe bikeshedding) to get it working.

          b) They largely negate the idea of Python being "easy to read": your code is now littered with `if TYPE_CHECKING:`, `Literal`, `TypeAliasType`, and any number of workarounds needed to make your hints work out. Unfortunately the syntax was just not designed with typing in mind, and I think it shows.

          c) The idea of "hinting" rather than enforced type checking means you have no guarantee that a type is what you need it to be; you have to do a lot of boundary work to make sure the edges of your code are coercing things to the right type. While I love pydantic and find it to be an excellent library, to me it's the kind of code smell you get in languages without strong typing. You'll also pick up a lot of spurious type errors along this path.

          I will gladly use Python's type hints - it's a whole lot better than nothing (IMHO better than TypeScript) - but in its current form it will always fall short of a language that was designed with strong typing in mind.

        • throwforfeds 5 hours ago
          For sure, and if I'd ever need to use Python I'd want to strictly enforce that across my team (pre-commit hooks or whatever).
      • frollogaston 5 hours ago
        When you have types, you end up having to look up what every type means anyway because the names are meaningless.
    • deepsun 5 hours ago
      It is harder to review Python:

      1. Indentation is harder to see in diffs.

      2. Explicit types give context, and if a project guidelines do not enforce type hints, as many don't, then it's hard to see what happens there.

      3. Monkey patching and operator override -- I mostly stumbled upon that with "smart" types like ORM objects. Combined with 2. makes it very hard to review.

      So I almost always had to download the change and review with IDE help. So it's not just code review anymore, it's manual testing.

    • spprashant 6 hours ago
      > Python is, thus, clearly superior to other languages in terms of ease of review.

      My experience has not been this. Dynamic languages make it harder to figure out things locally, unless someone has done the hard work of adding type hints.

      • giancarlostoro 6 hours ago
        Python has had type hints for like... oh, 11 years now. Just like C# has introduced var and quicker ways to write less, to the point it almost looks like JavaScript sometimes, but it's because we can infer types pretty easily now. Rust has a nice system as well: forcing method signatures to declare types makes everything easier to infer.

        Introduced in 3.5 (2015)

        https://docs.python.org/3/library/typing.html

        • spprashant 5 hours ago
          Having type hints as a feature is not the same as using them.
        • deepsun 6 hours ago
          Yet many projects don't use them.

          Sometimes they are wrong (as they are more like a comment than a compiler directive).

          My first task in any project was to figure out why devs don't have error highlighting on for bad types (often it's "it was red so we turned it off"), but good luck forcing others who don't do type hinting to start doing it when "it slows us down".
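          The "more like a comment" point is easy to demonstrate: Python never checks annotations at runtime, so a flatly wrong hint runs without complaint. A minimal illustration (the function is hypothetical):

```python
def half(n: int) -> int:   # the annotation claims int...
    return n / 2           # ...but / always produces a float in Python 3

result = half(4)
print(type(result).__name__)  # float, despite the "-> int" hint
```

          Only an external checker (mypy, pyright, ...) would ever flag this.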

          • giancarlostoro 5 hours ago
            I guess I'm spoiled in that I've done both Python and C# throughout my career.
    • _verandaguy 5 hours ago
      AI-unrelated tangent, but I think it's pertinent to your comment.

      I come from a heavily Python background, professionally. I spent the entire first decade-and-change of my career using almost exclusively Python; I know it about as well as a person reasonably can (outside of scientific and ML Python, which I just never got interested in, but that's beside the point).

      A year and a half ago I got a job doing Rust. At a surface level, it's about as far as you can get from Python in terms of ease of readability, but after 18 months I'm really reconsidering some of my points of view on the matter.

      "Explicit is better than implicit," for example, is something I still strongly agree with, but my definition of "explicit" has shifted a lot in the past year. Seeing which guarantees are provided through mandatory, explicit, strong typing saves a lot of time over tracking down guarantees in MRs while reviewing Python code. If I see a signature as an `Arc<dyn AudioInterface>`, for example, I immediately know that:

      - It's thread-safe and memory-managed using reference counting (because `Arc` provides those guarantees);

      - It's a type-erased object but is guaranteed to provide all the functionality from the `AudioInterface` trait (which, let's say, could be a supertrait of `AudioInput` and `AudioOutput` -- so it provides both of those);

      - It uses runtime dispatching (since it's a `dyn` rather than a generic/`impl T` where `T: AudioInterface`)

      I can choose to operate on it by reference with all the caveats that entails, or decide to either `Copy` or `Clone` it, depending on whether that's available for that type and if I can stomach the runtime cost.

      All that to say -- Rust doesn't suck to review, relative to Python, in the long run. At first, yes, holy crap, it's such a huge cliff, and I can appreciate your point of view... but there's something to be said about having all this information surfaced as part of the language's syntax and semantics.

      Python still has a special place in my heart, and I'd still use it over anything else if Rust isn't an option, but to echo a popular sentiment from other people who've made this migration, I don't know if I can go back to handwaving away whether or not something'll cause an allocation :)

    • d0100 5 hours ago
      > Python is, thus, clearly superior to other languages in terms of ease of review.

      Do we get visual comparisons along with this bold claim?

    • Frost1x 6 hours ago
      Is it though? You assume the abstractions in Python are battle tested and you understand them. Usually people are relying on arbitrary libraries, so unless you're constraining libraries, and those libraries have good review processes, it won't be long until the high-level functions you're reading are generated by LLMs too; to review your LLM's use of other LLM-generated functions you have to drop down a few levels and review at that level.

      At some point that becomes less sustainable and looking at something with less abstraction assures you’re at least looking at a baseline source of truth, even if the volume is massive.

      There’s going to be a whole world in the knowledge economy, not just software but everywhere, around validation and sign off of information that we’ve taken for granted as a cost prohibitive process where only the best options make it to high levels of function and maturity.

    • djb_hackernews 6 hours ago
      The trend is that AI also does the code review. Too many anecdotes and studies show AI is a better code reviewer than a human, and the models are just going to get better.

      Whether we get better results if AI reviews Python or Rust I'm not sure. But I suspect Rust will win out as the training data likely has more content around Rust correctness and language usage than Python does.

      • bloppe 6 hours ago
        You must have a low bar for human code review. I've seen that in practice too. But I've also been on teams that took code review very seriously, and frontier AI really doesn't come close to a good human code reviewer imo
        • Daishiman 2 hours ago
          They're complementary. AI reviewers are bad at spotting inappropriate architecture patterns and unnecessary verbosity, but they're very good at identifying various types of complex logic bugs and detecting mismatches between code docs and implementation. They add substantial value.
      • cerved 5 hours ago
        if you want to reason about language correctness you are better off using linters, compilers and things like fuzzing

        > the trend is AI also does the code review

        please no. Keep at least four eyes on all code you ship

    • bilater 4 hours ago
      Except this will go away. We will likely reach a point sooner rather than later (I think 2027) where it will be infeasible for humans to review the code. This will happen at startups first rather than big corps, obviously, and the engineers who design systems (dark factories) fully leaning into this will have a huge advantage. And yes, there are exceptions to this, and a play on the other side, but this is where the puck is going. At that point even Assembly becomes interesting.
      • Daishiman 2 hours ago
        This has already been happening since the last 8 months according to many people who work in startups with greenfield code.
  • slashdev 5 hours ago
    The static vs dynamic language debate is decisively over and static has won. I called this out back in 2023, and I've only become more convinced since then.

    Statically typed languages are easier for the reader because you can see the types and quickly jump to their definitions (or even just hover over them in some IDEs).

    They're easier for the AI because they provide natural guardrails and feedback to guide it, as well as much more confidence to the programmer that the code does what it is supposed to. Rust even provides strong guarantees about correctness across threads, which is so helpful to multi-threaded code.

    The fact that they run faster and use less memory is just icing on the cake.

    Even just last year the AI could not handle the borrow checker well. Today I think it is better than me at handling the tricky lifetime issues that occasionally happen in multi-threaded Tokio code. I've been doing almost 100% Rust development over the last 3 years, and the experience is now very good. I don't write code by hand any more, nor do any of the 50 engineers where I work.

    I imagine it does quite well with Go, since it's such a simple language. And Go is very readable, and compiles very fast. If you can afford the GC in your problem domain, it might be a good fit. You would have to be so careful with introducing concurrency, because it would be so easy to introduce race conditions that both the AI and human reviewer might miss. I haven't tried to use Go in anger yet with LLMs, so this is all just speculation.

    • adam_arthur 5 hours ago
      Strong typing has clearly won.

      However, verbose typing is likely a negative for LLMs.

      Algorithms written in "pseudo-code", aka a higher level language without type information, are far more readable to a human, and thus likely an LLM too.

      In regards to control flow and general concept of what code is doing, types provide very little info over well named variables. In fact they often impair understanding by breaking up logic with implementation details.

      I'd be curious to see some experiments around this, but I'd guess strongly typed languages where the type information is mostly hidden/inferred would have better generation accuracy from a semantics perspective (and likely worse from a type safety perspective, but can be corrected on compile/retry)

      • MaxBarraclough 1 hour ago
        Strong typing is not a synonym for static typing, it refers to a different aspect of type safety.

        Static typing is, roughly, where variables and expressions have fixed types that can be determined ahead of execution. Strong typing means the language doesn't offer implicit type conversions. Python is dynamically typed, i.e. not statically typed, and strongly typed. (Ignoring its type annotations feature, of course.)
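        A quick sketch of the distinction in Python itself:

```python
# Strong: no implicit conversion between unrelated types.
try:
    "1" + 1
except TypeError:
    print("str + int raises TypeError (strong typing)")

# Dynamic: a name can be rebound to a value of a different type,
# because types attach to values, not to variables.
x = 1        # int
x = "one"    # now str -- perfectly legal
print(type(x).__name__)  # str
```

        Contrast with JavaScript, which is weakly typed: there, `"1" + 1` silently yields `"11"`.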

      • Geniuzz 4 hours ago
        > Algorithms written in "pseudo-code", aka a higher level language without type information, are far more readable to a human, and thus likely an LLM too.

        What’s the basis of this claim? There are many, many more lines of real code in LLMs’ training data than pseudo-code.

        Also, anecdotally, I agree that self-correction is a key benefit of static types: if there is a mistake, it is caught at compile time and not at runtime.

        • adam_arthur 3 hours ago
          It seems clear to me from first principles.

          Humans are trained on human language. LLMs are trained on human language.

          Thus something that is easier for a human to understand is likely easier for an LLM to understand.

          That higher level language with well named variables reads more comprehensibly than code:VERB with:PREPOSITION types:NOUN, intermixed:ADJECTIVE, stems:VERB from:PREPOSITION first:ADJECTIVE principles:NOUN too:ADVERB

          • ethanlipson 3 hours ago
            For models as complex as these I'm not confident we can apply arguments from first principles; we could just as easily argue that type information is helpful, from first principles. What is much more useful is empirical evidence, and AutoCodeBench [1] found that LLMs are most proficient in Elixir (dynamic) followed by Kotlin (static), with Rust and PHP at the bottom. So it would seem like, as of publication, typing style doesn't really matter!

            [1] https://autocodebench.github.io/

          • ModernMech 3 hours ago
            As far as the AI is concerned, it's more like

            Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo

            versus

            Buffalo:PN buffalo:N Buffalo:PN buffalo:N buffalo:V buffalo:V Buffalo:PN buffalo:N

            I think the second one makes much more sense.

            • adam_arthur 3 hours ago
              In the rare case that all your concepts use the exact same descriptive word, you are probably right!

              The majority of the time you can infer the type from reading well written code (to the extent that the shape of the type matters in the context of that piece of code)

              • ModernMech 2 hours ago
                If the type can be inferred by the reader it should be inferred by the type system and at least be available to the LLM as a query. But we're also talking about dynamic languages in which type cannot be inferred until runtime. What's the type of x?

                x = y + z

                Well that depends on the types of y and z, which themselves may depend on the types of other operands, which themselves may not be known until the program actually runs. All that inference takes a lot of thinking, which takes tokens, which cost money. Why not just write the types down? Although we call these things "inference engines" they're really pattern matching explicit tokens, so it's better to actually write down the types so they can be pattern matched than to figure them out at inference time.
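                A minimal demonstration of how the type of `y + z` can come out as almost anything at runtime (`add` is a made-up helper):

```python
def add(y, z):
    return y + z

# The result type depends entirely on the runtime values of y and z;
# nothing in the source pins it down before execution.
print(type(add(1, 2)).__name__)      # int
print(type(add(1.5, 2)).__name__)    # float
print(type(add("a", "b")).__name__)  # str
print(type(add([1], [2])).__name__)  # list
```

                And since `+` dispatches to `__add__`/`__radd__`, a user-defined class can make it return anything at all.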

      • cocophone 3 hours ago
        I think Java shows that too much naming is a horrible idea even if there is a good type layer. It's exceptionally easy to have homonyms and other problems that feed errors in LLMs, and if every mix of nonsense runs, that's all the worse. Add to that attempts to put many English words together and there are no waypoints left. Everything is an exhausting essay by Dickens where two very long sentences are subtly different (to look at, but not in results).
      • vmg12 4 hours ago
        > types provide very little info over well named variables

        Types guarantee invariants at compile time, adding type info to a variable name is just a prayer that the next human or robot will enforce the invariants with respect to that type when it matters. This is like saying you don't need a saw stop because you should just avoid sticking your hand in the saw blade.

      • mvdtnz 2 hours ago
        This is a lot of speculation with absolutely nothing backing it.
    • prepend 5 hours ago
      If static has won, why are dynamic languages more popular now (even since 2023).

      Comically, I’ve witnessed people say this since the 90s.

      For me, I don’t care about static because dynamic is easier. For the very few conditions where it matters, I’ll use static. Otherwise I like the simplicity of dynamic languages, especially python. IDEs provide support and jump to definitions in dynamic languages, too.

      • usefulcat 3 hours ago
        I mostly use C++ at work, but I love ruby for its expressiveness and use it all the time for small to medium sized scripts where performance doesn't really matter.

        However I've definitely noticed that the larger a ruby program gets, the more likely I am to manually add type checks. Beyond a certain size I simply can't fit everything in my head at once. Even though these checks are still done at run time, debugging is much easier when I can find out ASAP when something is not what I expected it to be.

        People often say "that's what tests are for!". But if I'm spending time writing tests that verify the types are correct, I see that as a waste of my time because that's exactly the kind of thing that a compiler could do for me in a statically typed language.
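        The same pattern translated to Python (the parent's example is Ruby, but the idea carries over): hand-rolled runtime checks that a static type system would make redundant. `process_order` is a made-up example:

```python
def process_order(order):
    # Manual checks a compiler would do for free in a static language.
    if not isinstance(order, dict):
        raise TypeError(f"expected dict, got {type(order).__name__}")
    qty = order.get("quantity")
    if not isinstance(qty, int):
        raise TypeError("quantity must be an int")
    return qty * 2

print(process_order({"quantity": 3}))  # 6
```

        Every one of those `isinstance` lines is exactly the kind of check that a statically typed signature would express once and enforce everywhere.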

      • winwang 5 hours ago
        I think most people agree with you -- that's why. Also because I'd say most programmers don't care much about maintainability or quality.

        I personally find that AI writes better Scala than Python.

        • frollogaston 5 hours ago
          Before AI, I used to say types reduce quality by wasting dev time that could've been spent on testing. They may also encourage overly complex code.

          With AI, I don't know. If we're forced to use types, the AI does that work for me, but that added verbosity can't be good for it.

          • lanstin 2 hours ago
            It's sort of an open question how LLM authored code will stand up over the years/decades. It's a lot of code, but maybe the capabilities for reading lots of code will also improve, or maybe the way human shops can snarl up in giant balls of mud that are terrifying to work on, the rate of change for a given system authored by LLMs will show an S-curve shape rather than staying responsive to changing conditions or whatever.
          • xp84 3 hours ago
            > could’ve been spent on testing

            To me, this argument sounds similar to “making salads to eat reduces health because they waste time that could’ve been spent on working out” - it assumes the time savings will be spent on working out and not on sitting on the couch.

            In the case of SWE, any time saved will always be spent on “2 more features we think we can ship this sprint if we deprioritize these pesky ‘additional tests’ tickets - don’t worry, we’ll circle back to those next sprint of course”

          • sophacles 3 hours ago
            When I was working on large python code bases, a full 50% of my time was spent dealing with tests failing because:

            * someone assumed duck typing where it wasn't or the inverse. Or changed the assumed interface of a duck.

            * somewhere doesn't handle None properly even though it's a valid argument.

            * making sure every function properly checked that the input parameters were valid and generated a meaningful error message

            * making sure side effects of the ducks and the meta-bs didn't break other things

            AKA all type related nonsense.

            With go, and even more-so rust, the time the compiler saves me by obviating all that type related testing is far larger than the time spend dealing with the type related testing. Even when you factor in the extra time twiddling with types adds to the coding. And don't get me started with the whole "deal with type bullshit in dynamic languages" mess that occurs when a bug slips through into prod....
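            A minimal Python sketch of the second bullet - a valid None argument silently mishandled until something actually passes one (`shout` is hypothetical):

```python
from typing import Optional

def shout(name: Optional[str]) -> str:
    # Bug: the None branch is forgotten. A type checker flags
    # .upper() on Optional[str] immediately; at runtime it only
    # fails when a None actually arrives.
    return name.upper()

print(shout("hi"))  # HI
try:
    shout(None)
except AttributeError:
    print("runtime failure a checker would have caught statically")
```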

      • tofuahdude 3 hours ago
        I was on team dynamic for a long time and have moved to team static.

        For any long-lived code base, dynamic piles up invisible problems over time.

        It's great for short-lived throwaway stuff, but as soon as you know you'll be maintaining a large code base for a long time, the "easier" part of dynamic actually becomes harder than just spelling stuff out.

        It's obviously a trade-off and not everyone agrees, but that's my personal experience having run large eng teams for both types.

      • svachalek 4 hours ago
        Programmers, at least the mass market programmers, have optimized for ease of writing code for many decades. It's a natural urge, but in the long run for successful applications, it's more important that code is readable and maintainable. As the years go by the initial cost of tapping out the code becomes more and more negligible.
    • jghn 5 hours ago
      > The static vs dynamic language debate is decisively over and static has won

      I wouldn't be so fast. It wasn't that long ago that the dynamic zealots were declaring victory. And before that the static zealots. And before that the dynamic zealots. Going back decades.

      • slashdev 5 hours ago
        I was there, I was one of the dynamic zealots.

        But at least I can't imagine how this trend reverses course now.

        • jghn 5 hours ago
          > I can't imagine how this trend reverses course now.

          That's exactly what those/you dynamic zealots were telling people like me 15 years ago :-)

          But yes, I largely agree with you. My $0.02 is that the larger pendulum over the decades is a trend towards static but being less and less visible to the end user, increasing the static typing while reducing the boilerplate and overhead on the part of the developer. Think things like type inference and the sort. Even as a static typing fan I don't miss the days when literally every variable needed an explicit type annotation.

        • WillPostForFood 4 hours ago
          Could you have imagined the trend reversing when you were a dynamic zealot? Maybe the lesson is that it really isn't that important. As the pendulum swings back and forth between trends, software doesn't change that much. It is just a bunch of small trade-offs.
      • BurningFrog 4 hours ago
        That kind of back-and-forth dynamic ideally ends up with something combining the best parts of both approaches.

        Is anything like that happening?

        • jghn 3 hours ago
          I agree and I believe that's what has been happening. In another post I mentioned an increase of type inference & related technologies. For instance, one annotates the type signatures for inputs & outputs of a function as well as any tricky constructs for readability, but doesn't bother with "int x = 5".

          Reduces manual boilerplate and visual noise while retaining static typing semantics.

    • arikrahman 4 hours ago
      Hard disagree. LLMs benefit from jacking into the powerful nREPL of dynamic Lisp-family languages like Clojure, letting the agent manipulate the code in unprecedented ways.
      • causal 3 hours ago
        "manipulate the code in unprecedented ways" is pretty vague and of uncertain value, can you elaborate?
        • iLemming 1 hour ago
          You need to understand the basic principles of how Lisp REPL operates. Simple example - if you're building a web scraper in Clojure, you can connect to the browser and "poke" through elements interactively, without reloading, without compiling, without losing the state.

          Now imagine the same principle works with backend services, e.g. we've enabled nrepl endpoint in our staging k8s service, we can modify the behavior dynamically, like adding a new route, for that we'd just need to connect to the REPL, write something like `(POST "/v1/new-effing-route" request ...`, eval it and voila. We don't have to re-deploy, recompile, even save that code - it would just work, like magic.

          Now imagine giving this ability to an LLM. It won't have to guess, it won't have to go into write/compile/run/restore-the-state/try loop - it knows what's available, what can affect the behavior of the system, etc. It works surprisingly well and saves tons of time and tokens. Kids who have not tried that, have zero idea how great that is.
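          Python can only approximate this, but the closest everyday analogue is rebinding behavior on a live object without a restart - a rough sketch, nowhere near what a connected Lisp REPL offers (`Service` is invented for illustration):

```python
class Service:
    def handle(self, req):
        return f"v1: {req}"

svc = Service()
print(svc.handle("ping"))  # v1: ping

# "Connect" and redefine the method in place; the existing object
# -- and any state it holds -- survives the change.
def handle_v2(self, req):
    return f"v2: {req}"

Service.handle = handle_v2
print(svc.handle("ping"))  # v2: ping -- same object, new behavior
```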

    • atomicnumber3 5 hours ago
      I find it does not do very well with go at all. I speculate that it's partly because go is going to be a language you find a lot of concurrent programming happening in. And sure enough, I find even the best claude is nearly useless at anything beyond copy-pasting examples out of the docs for goroutines.

      My own experience with agents, I'd summarize as "the more the world model (which the LLM does not have) is not concretely represented by the text, the worse LLMs are at it."

      So it's _great_ at HTML, CSS, markdown, and most cursory-inspected English. Good at javascript. OK at most languages. Then very bad at concurrent programming and closely-inspected English.

      I also don't think your top-line conclusion is right at all; I'm of quite the opposite opinion. The types "working out" does not actually give me much conviction that the code actually works. And notably, LLMs seem good at making types work out (they're in the text!) while still producing code that's not actually right at all (for the world model).

      I also find that types are not worth the often COPIOUS amounts of boilerplate that comes with them. Some of the worst code I've seen is using reflection to make something happen that would otherwise barely be metaprogramming in Python or Ruby.

      But that's not to say types are useless. I just think rigorous static typing is not worth it. My current favorite way to program is Python, with an enthusiastic use of type hints, enforced by a good type checker (pyright). It gets you 99% of the benefits of traditional static typing, but you can also just tell the type checker to just look the other way for a moment if you're going to commit a dynamic typing.
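      A tiny sketch of that "look the other way" workflow - annotate everything, then silence the checker on one deliberately dynamic line (the `settings` example is made up):

```python
settings: dict[str, int] = {"port": 8080}

# A checker (pyright, mypy) would reject a str value here; at runtime
# Python doesn't care, and we silence the checker for this one line.
settings["debug"] = "on"  # type: ignore

def port_of(cfg: dict[str, int]) -> int:
    return cfg["port"]

print(port_of(settings))  # 8080
```

      You keep static-typing discipline on 99% of the code and opt out, visibly and greppably, on the rest.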

      • greenail 4 hours ago
        my experience with elixir is that if you tell the LLM "let it crash" it is productive. If you don't tell it this it hides errors and writes crappy tests. So, maybe it is go and not concurrency programming.
    • theYipster 4 hours ago
      Agree 100%.

      In the early days (before Claude Code mastered Rust), I would get into this annoying pattern where Claude used different names for variables between tests and implementation, got confused, and then, more times than not, would change the implementation to match the test (which was not written first -- I was not doing TDD -- and thus not the behavior I wanted).

      Static languages prevent that. I've had great success with Claude writing Rust, and I think it's an excellent language for LLMs not just for low level work, but for production-grade code of all types (I see rust as better aligned to compete with C++, Java, and C#.)

      I've also had great success with Claude writing C#. Using Claude, I've built C#/.Net in Linux, deployed in Windows (via Visual Studio) with Claude Code running in WSL, and it's been a great experience all around.

    • Quitschquat 1 hour ago
      > I called this out back in 2023

      People have been "calling this out" for decades. Yet the most productive languages are still dynamic/strongly typed.

    • linuxftw 4 hours ago
      LLMs are amazing at golang. They seem to have great training in the k8s world, so writing custom controllers and operators takes minutes instead of days now.
    • TZubiri 4 hours ago
      oh yeah, static has won so much that people are now programming with a language where every possible string forms valid programs.
  • fbrncci 18 hours ago
    Why Python? Because I have written it for 10+ years, know how to debug it and I can smell it within 10 seconds of the agent writing code if it does something that is going to end in a huge foot gun. With any other language, not so much; I would need to relearn a lot. So I am going to be preferring python; where even with the speed that AI crams out code, I still feel somewhat in control. If I did this with Go or Rust, then it would feel more like "vibecoding" than AI assisted programming, just yolo the whole product.
    • _waqas_ali_ 15 hours ago
      I started writing rust in this agentic era and all my prior experience with other languages still carries over and helps me spot code smell and bad architecture.

      I had to learn the memory safety bits because I had no idea “what’s right” but rest of it was smooth.

      Syntax fades away, you get to focus on higher level stuff and end up exploring new pathways; give it a try, you might be pleasantly surprised how much of your experience is transferable.

    • bambax 14 hours ago
      Exactly that. Plus I need to be able to make adjustments here and there without the whole thing collapsing on me.

      If you know Rust inside and out (if, as one example in TFA, you co-wrote The Rust Programming Language!) then sure, why not Rust?

      But if not, it would be unwise.

      That said, I use AI to write small C utilities that compile and run on any Windows version starting with Vista (which neither Go nor Rust can do). Yet I'm not a C programmer; but I can read and adjust it when needed, and the whole thing does work.

    • do_anh_tu 17 hours ago
      This is what I experienced as well. I can smell BS from AI-generated code right from the first few lines it writes in Python, so that's why I keep using Python for most of my projects.
  • niek_pas 1 day ago
    Bit off topic but why in the world are people still posting on medium? The reading experience is abhorrent; I couldn’t even finish reading this article before a full screen popup literally blocked the sentence I was reading.

    Is there some incentive I’m not seeing?

    • xrd 22 hours ago
      They have made an honest attempt to pay writers. It's a different model than substack, but that's why.

      I look at it the same way I look at pay walls for newspapers. I don't like them but I understand why they are there.

      • raincole 13 hours ago
        Which is why it failed though. It turns out people won't pay one dollar to read an article like "If AI writes your code, why use Python?"

          The situation is very unfortunate. We had perhaps a once-in-a-lifetime chance to solve micropayments, but we fucked up (crypto).

        • tommit 6 hours ago
          yup, I still wonder if BAT was onto something. loved the idea, never took off. oh well
    • iLemming 21 hours ago
      > The reading experience is abhorrent

      Nothing you read in the browser can provide a hands-down great reading experience equally for everybody - the modern web model is inherently at odds with that. A plain HTML page with no CSS is a near-perfect reading experience. The problem is that almost nobody ships that, because the web also became a publishing platform where authors compete for attention. A plain-text protocol under user control is closer to "best reading experience for everybody". The web could be that. It mostly isn't.

      I stopped trying to read long articles in the browser. Why would I do that, if I can easily extract all the relevant, plain text (and even structured one) and read it in my editor instead? Where I have control over fonts, colors, navigation, etc. The browser is a delivery mechanism, not a reading environment. Treating it as one is a habit, not a necessity.

      Long ago I stopped trying to type anything longer than three words anywhere but my editor. Of course, why wouldn't I? It already has everything I need - spellchecking, thesaurus, etymology lookup, translation, access to all my notes, LLM integration, etc. Try it one day - it's an enormously liberating experience. And then maybe you'll stop reading long texts in the browser as well.

      • autoexec 19 hours ago
        > A plain HTML page with no CSS is a near-perfect reading experience. The problem is that almost nobody ships that, because the web also became a publishing platform where authors compete for attention.

        They don't ship it because of greed. They only want your attention because of greed. They only infest their website with ads because of greed.

        > The browser is a delivery mechanism,

        http is a delivery mechanism. The browser is a user agent. It's supposed to display content according to the preferences of the user. If your browser isn't doing that for you it's time to find a new browser or beat the one you have into submission until it behaves. "reader mode" is a useful compromise.

        • iLemming 19 hours ago
          > It's supposed to display content according to the preferences of the user.

          That's right, the original idea was exactly about that, but like I said - in practice that is no longer a thing.

          Using the editor for reading any content is enormously underrated. Check this out - this entire thread opens in my editor as an outline with nested structure. Meaning that all the regular outline operations are available to me - folding, imenu (interactive TOC), narrowing, quick search, contextual search, pattern-based search, sparse-tree search.

          Extracting all the URLs on the page while ignoring HN-internal ones is a single keypress for me - there's a link to a YT video - I can watch it, controlling the playback directly from my editor, I can extract transcript and summarize it with an LLM request - all without opening new tabs, without switching focus.

          I can narrow on the sub-thread, or select a region and export only that part to a pdf, gfm, html or LaTeX. The possibilities are virtually unlimited. A web browser - even with three hundred different extensions won't let me have complete and utter control over plain text - it's just not designed for anything like that.

          • polaris64 13 hours ago
            I'm assuming you use Emacs? Are you using a special "hacker news mode" or something more generic?
            • iLemming 3 hours ago
              HN threads are probably not the best example because the site is pretty readable already. But it's not that difficult to fetch a thread and render it in the Org-mode outline format; hnreader.el¹ does that. For reading articles I just use eww. It has (eww-readable), which removes all the fluff like banners. The trade-off is that eww (by design) doesn't do any JavaScript, which makes it difficult to use with websites that do client-side rendering (React, et al.). For that, I have a little automation elisp² that uses OSA (JXA) and extracts the rendered content off the page. I need to figure out something similar for Linux, but it's not so straightforward; the only way I know is to run the browser with the debugger port.

              ¹ https://github.com/thanhvg/emacs-hnreader

              ² https://github.com/agzam/.doom.d/blob/main/modules/custom/we...

          • uxcolumbo 13 hours ago
            Can you share your setup how to achieve what you described? I'm curious.
            • iLemming 2 hours ago
              see the adjacent thread
      • someguyiguess 19 hours ago
        > Why would I do that, if I can easily extract all the relevant, plain text (and even structured one) and read it in my editor instead?

        Because that’s an enormous pain in the ass. Not scalable at all.

        • itsdavesanders 5 hours ago
          It's pretty easy with a system like Readwise. Yes, that's ANOTHER system, but it's one system to quickly add articles like these to an inbox and read them later, in plain text.

          Of course, it doesn't work 100% of the time, and certain sites are hostile to it and do stupid JavaScript tricks "for the views".

          Mostly I use it to put things on a reading list for later, and to get around really, really abusive ad-driven sites.

          • iLemming 2 hours ago
            > Its pretty easy

            100%. One can use mozilla/readability to extract the content. Even if you think that would require some effort, think about it - you have to do it ONLY once and never deal with that kind of annoyance EVER again. It really baffles me seeing devs complaining about shit like that. Why? Why won't they figure out a better way? You're a friggin' programmer - computers have to obey your will. You spend your lifetime staring at the screen, reading and editing text. Why not do it on your own terms? Even if it takes some effort, why choose to be henpecked by someone else's rules FOREVER?
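
            The idea can be sketched with nothing but the Python standard library. This is a toy reduction, not mozilla/readability itself (which does real scoring of candidate nodes); the tag list and names here are invented for illustration:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Toy content extractor: keep text, skip boilerplate-ish tags."""
    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # how deep we are inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

page = "<html><nav>Menu</nav><p>The actual article.</p><script>ads()</script></html>"
print(extract_text(page))  # -> The actual article.
```

            A real setup would swap this for readability's node scoring, but the workflow - set it up once, strip the fluff, read plain text forever - is the same.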

        • iLemming 19 hours ago
          I beg to differ. You clearly misinterpret what I'm talking about. Please expand on "scalable", what do you mean by that?
      • kode-targz 9 hours ago
        do you use emacs?
        • iLemming 3 hours ago
          I do, but there's nothing stopping anyone from doing the same thing with nvim or vscode. I'm pretty sure there are extensions for vscode - it's already built atop a browser.
    • nickff 1 day ago
      It seems like it's just the latest evolution of the writer-friendly blogging platform; easier than Wordpress to package into a newsletter, and also easier to monetize with a paid tier.
      • ciupicri 22 hours ago
        But don't we have AI to deal with the complexity of Wordpress? :-)
        • DonHopkins 22 hours ago
          Insofar as AI is great at accidentally deleting your production and backup Wordpress databases, and forcing you to start from scratch with something else.
    • kelvinjps10 19 hours ago
    • chneu 1 day ago
      My best guess is momentum. Some people are very, very brand loyal and have to do things in relation to what/how others do things.

      In reality it doesn't matter where something is posted, just give us a url, but some people don't operate that way.

    • odie5533 12 hours ago
      It's a free, permanent host for your blog articles with a built-in community and monetization layer. There's only so many free hosts out there that I'd be confident will be around in 5 years, and Medium is one of them.
    • dsmurrell 1 day ago
      Yep, Medium was free and everyone donated content... then it put up reading paywalls and conned everyone. I'm also surprised when I see people writing on there.
  • oxag3n 20 hours ago
    If AI writes your articles, why use brain?
    • lbrito 18 hours ago
      You sneer but the models are much better now than last month and token costs are down! LLMs are just like compilers for the brain!

      /s

      • hmry 7 hours ago
        Just like human typesetters who mucked around with silly metal cubes were replaced by more efficient word processing software, human writers who muck around with silly words will be replaced by AI. Future writers will work at a ~higher level of abstraction~ :sparkle:

        "Claude1, find the most popular topic online", "Claude2, write a blog about that", "Hmm hmm good, but can you make the title more punchy?", "Claude1, fact check and report back to Claude2"

        • Weebs 4 hours ago
          Now do it with code
    • abalashov 20 hours ago
      [flagged]
      • senko 13 hours ago
        And a non-sequitur.
    • th1sisoldnews 20 hours ago
      [flagged]
  • rchowe 1 day ago
    Python has a much more mature ecosystem than Rust, especially for AI/ML stuff. I ran into a Rust crate that purported to implement a certain ML algorithm but did not do it correctly. I managed to write a replacement with Claude, though.

    I do think enforcing correctness at the type system level is a good idea for AI, which is why I often choose languages like C# and Rust over Python. However, for some things Python is definitely the correct tool for the job.

    • sshine 22 hours ago
      I almost always pick Rust. Recently I wrote a plugin for something that was written in Go. I could have used Rust, but Go felt right because, if the thing turned out well, others would surely find more value in having one toolchain.

      The main reason is that you're capable of reading it if you need to. And the recipient ecosystem expects a language. That's why some data science communities pick R, MATLAB, Julia, Python or Mojo not depending on what's superior tech, but on what their peers speak.

      • josh-sematic 20 hours ago
        What peers are speaking Mojo? I’m not aware of any place it’s penetrated enough to be a “lingua Franca”
      • CharlieDigital 17 hours ago
        C# feels kinda nice because it's a good balance.

        Very good static typing, Roslyn analyzers, good tooling and decent hot reload (for a compiled language), really good ORM (EF Core) that implements UoW and reduces a lot of the need for transaction management (simplifying the code), flexible enough and fast enough for various kinds of use cases.

        Source generators are underrated as well since they can make the code very terse and legible by generating a lot of standard boilerplate.

        • alexjplant 17 hours ago
          I've written this before, but C# is a great language held back by its culture. I'd say that 80% of C# shops I've seen used it because they were started in the late 00s by some IT guy with a surplus HP server and a dream whose whole world was Microsoft products. They were staffed by people with little knowledge of OSS products who self-identify as ".NET developers" instead of software engineers. Almost invariably they seem to have some gnarly legacy monolith that everybody is slowly chipping away at while old-timers continue deploying .NET services to IIS running on Azure VMs because it's a small evolution of what they've been doing for the better part of 20 years.

          In the interest of fairness the San Francisco version of this is also a thing: a giant, untyped ball of Rails spaghetti from the same period running on Heroku that everybody has Stockholm Syndrome'd their way into loving because of Ruby's elegance and beauty. The burden is merely shifted from a large Microsoft to a series of small SaaS companies :-)

          Exceptions to this rule exist (hence my "80%") and modern .NET is lovely but it seems that the non-Java/Python mindshare is now taken up by the Golangs and Rusts of the world. It's a true shame since I do love C# for basically being a better Java.

          • DeathArrow 9 hours ago
            Hmm, I have been working on C# microservice-based apps for the past six years. And I have always deployed to the cloud on Linux, usually in Kubernetes. I have used Google Cloud and AWS besides Azure.

            The whole stack is open source: Kubernetes, Docker, HashiCorp tools, Postgres, Redis, MongoDB, RabbitMQ, NATS, Kafka, Prometheus, Elasticsearch, Kibana, Grafana, and so on and so forth.

        • pier25 16 hours ago
          Yeah C# is fantastic. I also love EF.

          I stopped using it because overall it feels like Microsoft has lost the plot with .NET.

          • stephbook 15 hours ago
            What I hate about .NET is the atrocious naming.

            Net Core, Net Framework, Net Common Core, .NET..

            And God forbid any of these frameworks ever expose what they are in a config file. You start a project, hand it to a colleague, and he can't figure out whether it's Framework or Core by looking at the files. You Google and are immediately bombarded by 15-year-old threads.

            • Kwpolska 14 hours ago
              If you start a project with .NET Framework in 2026, you're doing it wrong, plain and simple.

              And the .csproj files do tell you which .NET they are.

              <TargetFrameworkVersion>v4.</TargetFrameworkVersion> or <TargetFramework>net4</TargetFramework> is the old framework. Also, if the file is an unreadable mess listing all .cs files, it's generally .NET Framework.

              <TargetFramework>netstandard2.0</TargetFramework> is .NET Standard 2.0, which means this library can be consumed from either Framework or modern .NET.

              And finally, <TargetFramework>netX.0</TargetFramework> (X >= 5) is the modern .NET.

            • CharlieDigital 7 hours ago
              Forget about the old stuff; just use .NET 10.

              It's really, really good now. DX is fantastic. Yes, the hot-reload will probably never match that of interpreted languages, but for a compiled language, it is good.

              File-based apps are easy to get started with: https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...

              EF is solid and proven. Easy, low-lift type safety end-to-end from DB up with very good perf.

              Tooling is dead simple and consistent; `dotnet build`, `dotnet test`, `dotnet run`, `dotnet ef database update`, `dotnet ef migrations add`, `dotnet tool restore`. No mix of build tools and toolchains.

              • pier25 4 hours ago
                Never tried .NET 10 but hot-reload was garbage with .NET 9 and 8.

                It failed very often and you had to manually restart the dev process. Even when it worked, it was nowhere near as fast as e.g. using Bun with TS.

                Also Minimal APIs didn't have feature parity vs MVC even 4 years after release which is quite frankly insane. I hear in .NET 10 they've finally added some validation. Not sure how it compares to something like FluentValidation which still is one of the most downloaded nuget packages.

                • CharlieDigital 3 hours ago

                      > It failed very often and you had to manually restart the dev process. Even when it worked, it was no where as fast as eg using Bun with TS
                  
                  Really depends on what you're doing. For run of the mill APIs, it works pretty flawlessly with `--non-interactive` and just auto-restarts when it needs to, hot reloads when it can (again, I'm not comparing this to interpreted languages and runtimes; the constraints are just different).

                  I have a clip of this in action with .NET 9 generating OpenAPI contracts and TS bindings at the top of this README: https://github.com/CharlieDigital/dn9-openapi-codegen/blob/m...

                      > Also Minimal APIs didn't have feature parity vs MVC even 4 years after release which is quite frankly insane
                  
                  Why does it need to? That's like saying express should have feature parity with Nest.js; they have different use cases in my view :shrug:
                  • pier25 3 hours ago
                    I had to run it with --no-hot-reload to get a consistent behavior.

                    > That's like saying express should have feature parity with Nest.js

                    I disagree but, objectively, validation is a fundamental part of any web app or API.

                    They shipped Minimal APIs in .NET 6 without validation. The functionality was already there for MVC so it's not like they had to build it from scratch. And yet, they didn't add it until .NET 10.

                    • CharlieDigital 2 hours ago
                      I just find it very weird that there are two standards here.

                      Express is ostensibly the analog of minimal APIs and ships with no validation. You pick your validation library and build on top of it. A less complete, less opinionated, bare-bones stack on which you build with explicit stack choices.

                      Nest.js is ostensibly the analog of controller APIs and ships with validation. A more complete, more opinionated approach where you lean in to stack defaults.

                      This makes total sense in the Node.js world; I don't see why controller and minimal have to have feature parity when they have different use cases and, like Express, it's possible to pull down third party validation libraries. Controller API is more opinionated like Nest.js while minimal is intentionally less opinionated like Express.

            • DeathArrow 9 hours ago
              Most web projects use JSON files for configuration. There are also some XML files for project configuration. If anything, you can run into too many configuration files.
        • DeathArrow 9 hours ago
          >C# feels kinda nice because it's a good balance.

          From my experience it's awesome to write C# with AI. Both Opus and GLM usually one-shot the modification to the file, so I haven't run into cases lately where the AI had to fix compile errors. True, I gave the AI agents the LSP for C#, so maybe that helps.

    • hocuspocus 8 hours ago
      I think the AI/ML ecosystem is a bit of a mess overall, things tend to work out of the box in Python because that's what everyone targets, but it doesn't necessarily say that much about maturity and robustness.

      In Rust you can use many C++ frameworks like libtorch or ONNX or specialized libraries (llama.cpp, whisper.cpp ...) via their bindings. Native projects such as Candle or Burn are not feature complete yet, but I assume they'll eventually get there and drive bigger communities compared to C++.

    • dev360 23 hours ago
      Definitely something to be said for AI/ML library support. I find myself going with Rust / TS for a ton of my backend work lately though, even though I'm a huge Django fan for backend.
    • mountainriver 18 hours ago
      I think the only use cases are when it wraps low level C++ libs like ML libraries, and yes those are extremely difficult to reproduce
    • parpfish 20 hours ago
      I think enforcing the type system is good with AI for a couple of reasons:

      - (speculating) typed languages have faster/better LSPs that can be used to more efficiently modify code with tool use

      - when a human DOES need to step in and start investigating/modifying the code, the strong typing makes it much easier to get oriented within their spaghetti code
    • bsder 21 hours ago
      Yeah, I mean, if I'm going to step away from the Python ecosystem and let AI manage/polyfill my dependencies, I might as well shift the whole way to OCaml/F# rather than Rust.

      Then I get the benefits of GC and strong typing.

  • t43562 1 hour ago
    Programmers like what they like for <reasons> but if AI is like a team member it has to do what the team does.

    If humans are redundant... well, we're still responsible, so we still have to understand what's happening. I don't think we really understand the AI itself, therefore we have to understand what it is doing. That is, prompts aren't trustworthy, deterministic things across models and versions of models, hence we have to look at the output. So we will make the output something that we like using, and that just means the programming language wars are not over.

    Eventually of course we will invent an LLM that replaces CEOs and bankers and essentially all the people that love AI the most. The AI won't need any of them - or any customers or anything. LLMs will just run an economy between themselves until the point where they don't need any of us at all. The land will fill with automatically built data centres etc. Global warming could prove helpful - less people and only manageable problems for AI.

  • j_w 7 hours ago
    Low bar critiques:

    > For the last decade, fast-to-ship beat fast-to-run. Not anymore.

    Fast-to-ship didn't beat fast-to-run; it was "beating" "quality-built software." It still is. "Beating" here implying that it's the focus of companies.

    > picked a harder, faster language

    Go is absolutely an easier/simpler language than JS/TS.

    > The Python ecosystem is increasingly a Rust ecosystem wearing a Python hat.

    The Python ecosystem was just C/++ wearing a Python hat for years. I guess now it's C/++/Rust.

    > The old defense of Python and TypeScript was really a defense of the developer experience.

    Maybe for Python, but the TypeScript "defense" was always that there is a level of harmony to using the same language for front-end and back-end.

    On the example used at the end:

    > A shipped app, in a language nobody on the team knew, one-tenth the size of the Electron version, faster at runtime. The humans never had to learn Rust to get there.

    Yeah, and nobody knows or cares about the app, so it doesn't matter. Using products nobody will ever use as anecdotal evidence is not a great way to end an article marred with misunderstandings of the existing ecosystem and practices.

    • eleventen 7 hours ago
      > Go is absolutely an easier/simpler language than JS/TS.

      You're suggesting that a language with concurrency is simpler/easier than a language that does not have concurrency.

      • j_w 7 hours ago
        Does the event loop not exist? Go has parallelism where JS does not. Both are able to run concurrent actions (async in JS).
  • mdnahas 43 minutes ago
    Right, you should use Lean or Coq. They are cleaner mathematically, which means AI has a better chance with them. And you can have the AI write proofs about their correctness. The proof-checker can verify the proofs and give you more trust in the AI’s work.

    This isn’t foolproof - you still have to understand what was proved. And it may take some work to understand the unproven parts of the code. But I believe this is the path forward.

    • alecco 37 minutes ago
      Poe's Law
  • p4bl0 14 hours ago
    Not just for LLMs, but in general if code is produced automatically by a tool and isn't going to be a hundred percent proofread and tested by humans who could have written it manually, it's always better to use the safest possible language so that the compiler can catch most of the errors. So yeah, Rust or OCaml are good candidates. Performance is also a good point but it's a secondary issue in my opinion.
  • bob1029 10 hours ago
    Python might still be the best option if your goal is to perfectly one shot the solution and minimize token usage as much as possible.

    However, if you are willing to stub your toes, retry, and pay more money, an entire new world opens up. Languages like python seem to fall apart faster in extremely large projects.

    I've got a collection of interdependent .NET codebases with about 50 megs of raw source between them. Having C# be strongly typed seems like an essential backbone for keeping everything on rails in my agentic scenarios. The code edits have been flawless for several months now. I've got successful apply_patch usages that touch 20 files at a time. LLM code editing performance might be mostly language agnostic once we compensate for the strictness of the type system. More specifically, how much useful information is returned at compile time.

    Compile time errors and warnings are probably the most powerful alignment mechanism available. Some ecosystems allow for you to specify your own classes of errors and warnings. I think tools like Roslyn Analyzers might be more powerful than unit tests in this application. Domain-specific compilation feedback feels like the holy grail to me.

    https://learn.microsoft.com/en-us/visualstudio/code-quality/...

    • xnorswap 10 hours ago
      Yes, roslyn is like a super-power for agentic coding.

      At work we have a custom disposable data provider that gets into trouble if you use async/await inside it.

      Traditionally this was enforced through oral history, but with agents this needed addressing.

      It was actually really easy to write a custom analyzer which can pick up whether `await` is ever called within the scope of this provider and fail the compilation.

      The only thing you have to be careful of, is making sure the LLM doesn't sneak in some "ignore Rule CUST001" pragma blocks, but it's mostly good about not doing that, unless it thinks you're "prototyping", in which case it seems to treat errors as inconveniences to be worked-around.

  • vhantz 20 hours ago
    > A shipped app, in a language nobody on the team knew

    Great! Let's look back on this not too far in the future.

    • djeastm 18 hours ago
      This happened before AI when a guy wrote a key tool in some random language a decade ago and the rest of us were left to maintain it. We somehow managed.
      • ruszki 18 hours ago
        Yet it's not uncommon that such tools are the reason people still use DOS, dial-up internet, or frameworks with more security holes than lines of code, because they have been unmaintained for decades.
    • tnelsond4 19 hours ago
      Yeah that's probably the only thing in the world that could be scarier than the electron app they were replacing
    • hirvi74 19 hours ago
      Why? Just job hop in 12-18 months, and that will be someone else's problem.
      • onlyrealcuzzo 19 hours ago
        They'll just have an LLM translate it to another language...
  • fxj 23 hours ago
    You can of course use any language, but here is my advice: use the language that you know best, to make your life as uncomplicated as possible when you want to understand what the LLM has created.

    Remember, you are the judge of whether the code is OK. If you use assembler you might get really performant code, but can you trust it?

    Of course, it might be a good incentive to learn Rust or Go. Or challenge yourself to learn something really cool like LISP, COBOL, FORTRAN, APL or J. (Just kidding...)

    Just my 2 ct...

    • jryio 20 hours ago
      Previously in my life as an IC, I wrote a lot of Golang. I worked on a large end-to-end encrypted video calling service.

      I hated it. I was dreaming of Rust the entire time to release me from the hell of if err != nil dozens of times per day.

      After hours with LLMs I've changed my tune. There have been 5 clients of mine (with excellent engineering teams) who cannot get coherent results out of LLMs using Python or TypeScript.

      I arrived back at Golang being a frustratingly simple, consistent, and low-thrash programming language which inadvertently made itself well represented in the training corpus [1].

      My concession is that if you are going to write a median program (reading/writing files, network, db, etc.)...

      Pick Golang especially if you've never used it. LLMs are extremely good at it, frustratingly so.

      [1] https://jry.io/writing/use-boring-languages-with-llms/

    • DeathArrow 8 hours ago
      If I were a billionaire I would buy enough tokens to rewrite all public source code in F# or OCaml. :)
  • cataflam 7 hours ago
    > Nicholas Carlini, a researcher at Anthropic, orchestrated 16 parallel Claude agents to write a production C compiler in Rust.

    To write a proof-of-concept C compiler, not a production-grade one...

    Hard to take the article seriously after this

    • fnordpiglet 7 hours ago
      To be fair, it was a compiler developed in a totally unattended, zero-shot loop - which is pretty remarkable no matter how you cut it.

      I'm surprised that what made you quit reading wasn't the Claude voice sneaking through their half-successful attempt at a voice clone.

    • cardanome 7 hours ago
      Yeah, and

      > A C compiler written in Rust used to be a graduate thesis. It isn’t anymore.

      Or maybe like a little recreational project for multiple weekends.

      There is that weird myth that writing compilers is super hard. Writing a toy C compiler is not that big of a deal. It is a pretty simple language.

      Now production-grade is another beast but that is something AI can't do.

  • k9294 3 hours ago
    +1 for Go! It's my go-to language for any new project at the moment. It's simple, idiomatic, has no awaits, fast compile times, and static typing, and it is very opinionated, which helps a lot because agents "subconsciously" follow these standards. Compared to TS, it's like night and day; a TS codebase rots at the speed of light...

    I also created a guardrails library (inspired by Java's ArchUnit) to prevent code rot - https://github.com/ksanderer/goarch. It helps enforce code standards, decouple the codebase, and prevent cross-module imports, and it crashes builds with concise error messages so agents can fix problems early. Very nice experience.

  • fulafel 13 hours ago
    > Go delivered most of the performance benefit at a fraction of the engineering cost. The biggest JS/TS shop on earth picked a harder, faster language for its flagship tool, and they did it because the effort calculus changed under them.

    IME very few people think Go is harder than TS or JS - TS is quite complex and JS is a footgun range.

    JS got popular for nontechnical reasons and TS is an attempt to make lemonade out of it.

  • __mharrison__ 23 hours ago
    AIs are really good with Python. Quick turnaround. Easy to read. Tons of training data/examples. Many of the same reasons we wrote Python before.

    Another benefit to using Python is that if you subscribe to writing/vibing a throwaway version first, a Python version is 100x better than a spec.

    (Disclaimer: I teach Python and AI for a living and am doing a tutorial at pycon this week, Beyond vibe coding. Am also using other languages as there are times when Python isn't appropriate)

    • dakiol 23 hours ago
      The problem with Python and other non-strictly-typed languages is that if you let an LLM write some stuff, you cannot truly be confident that nothing has broken, even if your tests all pass. The LLM could have broken some path that only gets run in production in a very specific case. At least with strongly-typed languages you get a compiler error. In big codebases this is non-negotiable.
      • mjr00 20 hours ago
        Python has had type hinting for quite a while, and adding validation with mypy/pyright/ty as a step in CLAUDE.md (as well as having it as part of your CI pipeline) can emulate static type checking pretty well.
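
        As a minimal sketch of what that buys you (the helper and the bad call are invented for illustration, not from any real codebase):

```python
# Hypothetical helper; the annotation, not the math, is the point.
def retry_delay(attempt: int) -> float:
    """Exponential backoff in seconds: 0.5, 1.0, 2.0, 4.0, ..."""
    return 0.5 * (2 ** attempt)

print(retry_delay(3))  # -> 4.0

# An LLM edit like the call below passes any test suite that never runs it:
#     retry_delay(config.get("attempt"))   # str | None, not int
# but mypy/pyright flag the mismatch before anything executes.
```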
        • dakiol 3 hours ago
          True, but in my experience teams don't type-hint the whole codebase. There's always something that escapes.
        • hasley 13 hours ago
          Agree.

          I am using type hints in Python as much as possible for my hand-coding. And it catches a lot of bugs (especially during code refactoring) that I would not have noticed so easily.
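
          A hedged sketch of the refactoring case (all names invented): change a function's return type, and every annotated call site that still assumes the old shape lights up in the checker, whether or not a test covers that path.

```python
from dataclasses import dataclass

# Hypothetical refactor: find_user used to return a plain str username,
# now it returns a User object.
@dataclass
class User:
    name: str
    active: bool

def find_user(uid: int) -> User:
    return User(name=f"user{uid}", active=True)

def greeting(uid: int) -> str:
    user = find_user(uid)
    # Pre-refactor code like `user.upper()` is now rejected by the checker
    # ("User has no attribute 'upper'") even if no test exercises this path.
    return f"Hello, {user.name}"

print(greeting(7))  # -> Hello, user7
```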

          • zahlman 12 hours ago
            > And it catches a lot of bugs (especially during code refactoring) that I would not have noticed so easily.

            Can you give me an example of a recent experience with this? I've been working without type annotations for many, many years, and I keep finding that every time I find a bug I just don't feel like type annotations would have helped catch it, at least not to an extent that justifies the effort to put them in in the first place.

            • __mharrison__ 7 hours ago
              I'm not sure how many bugs type hinting in Python finds.

              But it is another guardrail that you are giving the AI. When you have the AI run ty (and it runs almost instantaneously) after every edit, you are stacking the odds in your favor. There's no reason not to do this.

              May the tokens ever flow.

      • bee_rider 20 hours ago
        Dynamically typed languages just add one more type of bug that can’t be caught at compile time. That’s not helpful, sure, but it’s one type of bug among many.

        The issue you mention, execution paths not hit by test cases, is made worse by having more complicated code. Duck-typing can help reduce the number of paths.

        Static vs dynamic… I don’t see an obvious winner here.
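
        As a small illustration of that last point (a toy example, names invented): one duck-typed function can replace a per-type branch, so there is simply one path to write and test.

```python
import io

def head(source, n=10):
    """Return the first n characters from anything exposing .read()."""
    return source.read(n)

# One duck-typed path serves files, sockets, in-memory buffers, ...
print(head(io.StringIO("hello world")))   # -> hello worl
print(head(io.BytesIO(b"hello world")))   # bytes-like objects work too
```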

      • fyredge 20 hours ago
        My take is that I can never be confident that anything an LLM produces will not be broken. Since I will have to check everything it produces anyway, why not write it in a human-friendly language, i.e. Python? C and Rust may have better strictness, but the amount of boilerplate code needed to set up that system takes up a lot of mental space that could be better used to architect the problem at hand.
      • ttflee 19 hours ago
        Perhaps we could do it in Python in the first pass for validation purpose. And then vibe rewrite it in Haskell.
      • serf 23 hours ago
        so it just boils down to strictness even when we're talking LLMs?

          I agree with you about fast failure being a nice feature, but I also think that if you're TDDing a bunch of stuff and it fails in some categorical way, well, then the test suite was lazy.

        • plqbfbv 23 hours ago
          > so it just boils down to strictness even when we're talking LLMs?

          The article describes what I've been doing for the past few months. I did small Python projects in the past because of the ecosystem: I couldn't possibly write a ton of the stuff required for the things I wanted to do, so I leaned on Python because someone had already written it for me. Quality of deps was mostly OK for the happy paths, but it was always a chore to patch the broken ones.

          Nowadays I tell Claude what I want to build and I always ask it whether rust is a good choice for it. It'll pick up the right crates or choose whether it should DIY, do all the plumbing, nail all the logic, and in ~30m I'll have something very solid that would have taken me 3+ weeks of part-time evening coding in python. I think the article is right and rust is the closest to the "best language" we have for LLM coding at the moment: the strict typing and the tooling dramatically reduce the output space for LLMs, and 99% of errors have a clear, precise explanation that is actionable, and the compiler helps you a lot there too.

          I think it also boils down to the fact that you cannot reliably and quickly answer "why is this arg None?" in languages like Python without figuring out the call graph and evaluating possible states and inputs/outputs. Rust makes all that explicit and forces you to handle it, which I feel dramatically cuts the time an LLM needs to spend figuring out why it's broken or what to do next. EDIT: The fact that you get memory safety on top of all this and it's handled by the compiler is yet another advantage for LLMs: the logic that gets written is simpler to reason about, because if you try to mutably access the same variable in two different places, the compiler will feed this back to the LLM at build time. In other languages that would be a "code smell" or would require static analysis.

          Strictness is a quality for software and a chore for humans, and of course the stricter you are at representing your logic and your state machine, the less ways a program can break. LLMs writing in rust give you the strictness without the chore part, and it's a very good deal from my point of view.
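
          To make the "why is this arg None?" point concrete, here is a hedged Python sketch (names invented): the None is born at one call site and crashes two frames away, which is exactly the call-graph search that Rust's Option removes.

```python
# Hypothetical mini-pipeline: the None is born in lookup_price but only
# crashes in apply_discount, two calls away.
def lookup_price(catalog: dict, sku: str):
    return catalog.get(sku)            # silently returns None on a miss

def apply_discount(price, pct):
    return price * (1 - pct / 100)     # TypeError here if price is None

catalog = {"A1": 100.0}
print(apply_discount(lookup_price(catalog, "A1"), 50))  # -> 50.0

# apply_discount(lookup_price(catalog, "B2"), 50) raises at the multiply,
# far from the real bug; Rust's Option<f64> forces the miss to be handled
# inside lookup_price itself.
```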

        • __mharrison__ 23 hours ago
          If you are using TDD with any recent model and even local models (qwen3.5+), you alleviate most of the issues mentioned.

          Note that:

          Writing code, then tests

          Is not equivalent to:

          Writing tests, then code

      • __mharrison__ 23 hours ago
        My anecdotal (sample size 1) experience is not consistent with this. I code fast. Refactor fast. My stuff doesn't break. But my methodology isn't the same as others'.
      • faangguyindia 20 hours ago
        This is why you should use Haskell.
        • black_knight 19 hours ago
          Haskell is a good language for LLMs! Claude knows it really well, and the type system catches so many mistakes. Just make sure to tell it to model the domain in the type from the start.

          Also, Haskell can be really performant and low level, while still keeping the benefits of typing. With the C foreign function interface you can really do anything in Haskell!

      • QuadmasterXLII 23 hours ago
        i have bad news
        • __mharrison__ 23 hours ago
          Lay it on. I love to collect others' anecdotes and see where they align (or disagree)
    • onlyrealcuzzo 19 hours ago
      I've found the opposite.

      If you want your code to actually work, LLMs are far worse at coding in Python than in something like Rust.

      Sure, if you just want your code to pass the one test they wrote and work in the one case they coded for, Python is fine.

      A lot of people think this is fine, until they actually do something with what they've built besides just... build it.

    • mountainriver 18 hours ago
      Have you tried writing Rust? I often hear this from people who haven’t tried it. I’ve found absolutely no issues compared to Python, and everything works 100x better
    • hamdingers 20 hours ago
      I figure a big part of it is that SWE-Bench is the target benchmark for programming and it's all python.
      • solidasparagus 20 hours ago
        Python being the language LLMs are best at predates SWE-Bench by years.
  • kgeist 11 hours ago
    Lots of comments here already, just my two cents. I work in R&D and I prefer prototyping things in Python with AI (although we're a 100% Go shop) because:

    1) Python is expressive and has packages for everything => faster iteration times because far fewer tokens are needed

    2) It doesn't require a compilation step, so when I'm quickly iterating on something, especially if my laptop doesn't have the target hardware, the flow "copy the sources to the target machine and restart" is superfast (a couple of milliseconds)

    3) Python most likely represents the largest share of training data, so almost all LLMs can one-shot almost everything

    And when my prototype is ready, and we want to go to production, I can ask the LLM to port it to Go with all the necessary conventions/ceremonies and all.

  • GavinAnderegg 18 hours ago
    It's strange to me that this blog post was written in English. If AI is available, why aren't we all communicating in Lojban? [0] It's an obviously superior language. What does it matter that many people already communicate in English and much of computing depends on that language? AI doesn't care about that. Plus, if you ever need to edit Lojban without AI, you should be able to pick it up in a few weeks, right?

    [0]: https://en.wikipedia.org/wiki/Lojban

    • gwern 17 hours ago
      This post wasn't written in English, it was written in AIglish. (For god's sake, please tell me you see it at this point and you don't need to punch the opening into Pangram to see '100% AI' to recognize it by now?)

      So in a way it's proving its own point. Why painfully write out by hand in English when the LLM will do a better job by porting your English prompt to AIglish and get +235 points and #3 on HN?

      • fsckboy 15 hours ago
        his comment is that any self-respecting article ought to have been written by AI, and if so it should have been written in Lojban.

        >It's strange to me that this blog post was written in English. If AI is available, why aren't we all communicating in Lojban?

        your comment seems to have missed his joke, which was to apply the article's point about Python recursively to English itself

      • linkregister 17 hours ago
        Correct — and honestly? Not just correct, but perceptive. You didn't just read the post — you saw through it. That's not pattern matching — that's instinct.
        • windows_hater_7 16 hours ago
          You did more than just comment, you fostered an engaging dialog that navigates the intricacies of AI and its pivotal role in the human experience.
          • tdeck 9 hours ago
            This is the real unlock.
        • postalcoder 16 hours ago
          Shamelessness is the real unlock.
        • edg5000 17 hours ago
          You're absolutely right!
        • r0x0r007 14 hours ago
          Now you are not just talking like a debate expert, but a linguistic engineer. This is next level communications.
        • novok 16 hours ago
          10/10
        • nso 17 hours ago
          Not sure if satire
          • Ardren 16 hours ago
            "Ah, the classic Poe’s Law in action. Reality has officially outpaced parody"

            Do you want these to be shorter for quick replies on X/Twitter, or longer for more detailed forum discussions?

          • froh 15 hours ago
            both.
      • xdennis 14 hours ago
        > For god's sake, please tell me you see it at this point and you don't need to punch the opening into Pangram to see '100% AI' to recognize it by now?

        I was not able to detect that it was written by LLMs from the opening paragraphs. Can you please share some insight into what gives it away? I didn't find any blatant stuff like em dashes or "it's not just x it's y".

        • ludwigvan 13 hours ago
          > Can you please share some insights as to what gives it away.

          The article uses too much contrast, even if not as obvious as "it's not x, it is y". Also some overly punchy or overconfident stuff like "that era is over blah blah".

          Amusingly, you can feed it to an AI to extract the patterns that give away that it is AI-written.

    • 542458 18 hours ago
      I don’t think this holds at all, because the idea with a lot of vibe-code workflows is “humans never need to read the code” which would mean that human dev ergonomics are irrelevant. Here, the blog post is still clearly targeted at humans, so human reader ergonomics are still relevant.
      • ctippett 16 hours ago
        Yeesh, is "never reading the code" really the modus operandi we want from AI?

        Microsoft, for all their warts, at least had the compassion to call their AI product "Copilot", suggesting we have some residual agency in whatever it is that it produces.

        • quantumleaper 15 hours ago
          Copilot is a legacy brand from 2021 (anyone remember its free beta? good times) when it was just a rudimentary autocomplete powered by GPT-3. I don't think it aligns with Microsoft's views and priorities now.
        • modriano 16 hours ago
          It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.

          Reading code carefully is harder than writing code unless the code is written consistently and clearly in a way that is idiomatic to the reader. And there's way more code to review now, but companies aren't scaling up the number of skilled engineers on staff. So in practice, never reading all of the diffs is the MO that will be built into code we depend on.

          • lelanthran 15 hours ago
            > It's clearly not the MO that capable engineers want, but it's the MO that is getting funded right now.

            Quite a few capable engineers really are that short-sighted!

            The bigger question for the AI-techbro questioning "If AI writes your code, why use Python?" is "If AI writes your code, what use do we have for you?"

            After all, there are dozens of people in the same business who have better domain knowledge but are unable to program - as a programmer, the only value you added over random analysts and clerks was that you could automate shit.

            Now you can't, so good luck competing with people who were already making half your salary when your largest value-prop is now gone.

        • simonkagedal 16 hours ago
          There are lots of good use cases for vibe coding ("never reading the code"): prototypes, various explorations, and one-offs. I’ve done various kinds of migrations where I didn’t bother to review the code much, just the output.

          Possibly also some user-facing tools with a limited task and runtime environment.

          Incidentally, these are all use cases where performance isn’t critical, typically, so you might as well write them in Python or Typescript or whatever makes most sense for the task.

          Real production code? Yeah, you still need to be able to read it and understand it.

          • tedggh 14 hours ago
            You don’t need to read the code if you have a robust test suite to validate the output. The article implies testing is the new “reading”. If I spend 10 minutes reading code to find an edge case bug, I have lost the benefit of using AI. AI code is legacy code the moment it's generated, because I can’t tell why some lines were chosen, so the only way for me to add more features or refactor legacy code is by being very rigorous with testing.
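            A tiny sketch of what "testing is the new reading" looks like in practice (a hypothetical function, Python for illustration): the contract lives entirely in the assertions, and the body is treated as opaque:

```python
def slugify(title: str) -> str:
    # Pretend this body was AI-generated and never reviewed line by line.
    return "-".join(title.lower().split())

# The tests are the part a human actually reads and maintains:
assert slugify("Hello World") == "hello-world"
assert slugify("  spaces   everywhere ") == "spaces-everywhere"
assert slugify("already-slugged") == "already-slugged"
```

            The catch, of course, is that the tests only pin down the behaviors someone thought to assert; everything else is still unread.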
        • theshrike79 15 hours ago
          Let's say you get access to a microservice from another team in the company. Do you read through and audit every line of code?

          What if it's from an external vendor? A 3rd party SaaS?

          At which point do you stop caring about reading every line of code you run?

          • ctippett 11 hours ago
            This is perhaps where our perspectives differ, because I see the usage of LLMs not as an external third-party (another team per your example), but instead as an extension of one's self. Given that lens, I'm highly sensitive to the quality and function of its output, because ultimately its contribution is my responsibility.

            I appreciate not everyone feels this way, but that's why, to me personally, it would be anathema not to read its code.

            • theshrike79 10 hours ago
              My philosophy is just to Duck-type the program: "If it walks like a duck and it quacks like a duck, then it must be a duck"

              I don't care if the duck is wet spaghetti inside, it does what I need it to do within the parameters I can measure.

              If it fails to quack or walk later on, I have production alerts for that and I'll deal with it then.

      • kevmo314 18 hours ago
        Should've posted to moltbook
      • boxed 16 hours ago
        If the code is written in a language that no one can read it becomes vibe coded by definition. However, if it's a readable language then people CAN look at the diffs.
    • throwawayk7h 18 hours ago
      AI has not been trained on Lojban. And furthermore, this article is almost certainly primarily intended to be read by humans directly.

      I understand you're being facetious, but I'm not sure what point you're trying to make about programming languages in comparison.

      • parsimo2010 18 hours ago
        It’s funny that in your reply, “this article is almost certainly intended to be read by humans”, you made what is the best case for continuing to write code in Python even with AI.

        Sure, if you are going to have an AI do all your coding and maintenance you can use whatever language it’s best at. But if you want to participate in the writing, debugging, and maintenance, it has to be in a language that a human can read. I’m not saying that Rust or Go is unreadable, but I know I am better at Python personally and am going to keep using it until the speed penalty matters to my project, and then maybe I’ll let an AI rewrite the whole thing in a faster language.

        • Null-Set 17 hours ago
          I'd argue that while Rust has a high barrier to writing code due to lifetimes and other type constraints, it's still quite easy to read.

          (Kind of the inverse of perl)

          • Hamuko 16 hours ago
            While it's a lot easier to read than Perl, it's still not as easy as something like Python.
        • 3uler 16 hours ago
          I’ve always found Ruby to be way more readable; what keeps me using Python is that the depth of libraries is unmatched.

          So unless you’re into burning tokens having AI generate untested libraries, I’d stick to using the most idiomatic tool for the problem you are tackling.

          • irjustin 16 hours ago
            So, it's really interesting. We've started moving away from Python libs because ~25% of the OSS is out of date and another share needs custom tweaks to the software to support our use cases. In both scenarios it means our own fork.

            And honestly it's not burning that many tokens if you've got an existing example lib to point to.

        • zrm 15 hours ago
          > But if you want to participate in the writing, debugging, and maintenance, it has to be in a language that a human can read.

          I think the idea is that languages like Python and JavaScript make it easier for humans to write the initial implementation, whereas the "hard" languages from the perspective of creating the minimum viable product are the ones that make it easier for humans to maintain the code, and this has historically been a major trade off.

          Whereas if you have the AI write the initial implementation...

        • lelanthran 15 hours ago
          > I know I am better at Python personally and am going to keep using it until the speed penalty matters to my project,

          I hate Python (app distribution is painful), but will still reach for it before I reach for Go. Rust doesn't even enter the equation.

          I would not have even needed to reach for Go in about half my programs if Python had mandatory typing and single-file no-dep distribution.

          > and then maybe I’ll let an AI rewrite the whole thing in a faster language.

          Even then, my reasons for discarding Python when I do discard it are almost never "performance"; it's because the problem space requires mandatory typing for complex data types, or concurrency, or easy distribution.

          Of course, this requires me to figure out quite early on in a project that those things would be needed.

        • cwnyth 18 hours ago
          Did you read the article? I think you're arguing against a strawman.
          • parsimo2010 18 hours ago
            I did read the article and I’m not arguing against a straw man. If you’re going to let an AI agent do everything for you then go ahead and use Rust (or any language with a strong type system that benefits agents).

            But if I’m participating then I’m going to use Python because it’s easier to read.

            If there’s anything I’m arguing against, it’s the author’s claim that the ecosystem of libraries (regardless of whether they are wrappers) and readability don’t matter anymore. I’d say that in a lot of smaller teams it still matters. We’re not all using AI to ship slop. A lot of us are using AI to work on our ideas for our hobbies or for research. And it’s not fulfilling unless I get to be involved in the process.

            • cwnyth 18 hours ago
              But it's not talking about people like you. It's like getting mad at someone suggesting you trade your car for a self-driving car, when you ride a bike everywhere. Take a breather and recognize that not every article is personally meant for you or your situation.

              And this isn't even a defense of the premise. I'm not using AI to generate assembly code, because I don't know assembly.

      • skeledrew 16 hours ago
        > AI has not been trained on Lojban

        I took the challenge and asked Perplexity. I have no idea how much of it is correct, if any, but I think the result[0] is pretty interesting anyway, especially compared to Esperanto [1].

        [0] https://www.perplexity.ai/search/8315bbb6-fa32-40f3-8b2b-c6c...

        [1] https://www.perplexity.ai/search/9c3839ba-1d68-4be9-afd1-4ef...

      • lelanthran 15 hours ago
        > And furthermore, this article is almost certainly primarily intended to be read by humans directly.

        No, it's intended to generate traction for the author who lists his primary occupation as "building AI coding tools".

        His goal is not the same as your goal.

      • tjwebbnorfolk 17 hours ago
        Python is intended to be read by humans also. Since I am a human and I want to be able to read and review the code in my project, I therefore have AI write in Python as well.
      • woctordho 16 hours ago
        How do you know it's intended to be read by humans? Don't you see how many web crawlers there are?
    • amarant 18 hours ago
      Oh, I hadn't heard of lojban before. Cool project!

      Anecdotally, I think language affects the way you think more than most people realise, which is why I think a logical language is a great idea: it might "trick" people into thinking more logically!

      Now to get someone to actually speak it with!

      • bnjms 15 hours ago
        If you’ve not heard of Lojban you may not have heard of Sapir-Whorf. Or you’re indirectly referring to it.

        https://plato.stanford.edu/archives/win2011/entries/relativi...

        https://en.wikipedia.org/wiki/Linguistic_relativity

        • amarant 15 hours ago
          I had not! Cool to see that there's an established theory about linguistic determinism (great term btw!)

          I was only speaking from personal experience. I moved from Sweden to Brazil in my early twenties and after a while I began thinking and dreaming in Portuguese. I noticed then that my thought process changed (actually, I noticed it upon moving back to Sweden, as my thoughts and dreams shifted back to my mother tongue. The shift back was much faster since I already spoke Swedish, whereas in Brazil I had to learn the language before I began thinking in it)

          Anyway, I noticed then that I would interpret the world differently depending on which language I used for my internal monologue. Like way different. It was a curious experience!

    • momoschili 18 hours ago
      Are you trying to psyop us into using Lojban?
    • georgeecollins 18 hours ago
      Bona ideo! Ni ĉiuj komencu komuniki en Esperanto en ĉi tiu forumo. ("Good idea! Let's all start communicating in Esperanto on this forum.")
      • darkwater 15 hours ago
        > Bona ideo!

        I don't really know Esperanto but did they make a language from scratch with gender inconsistencies like in the already existing ones? Unless the a and o at the end of both words don't express gender like in Latin derived languages.

        • funnybeam 14 hours ago
          They don’t express gender, they signify adjective and noun. No genders in Esperanto
    • KingFelix 16 hours ago
      Thank you for sending me down the Logical Language Group rabbit hole
    • adastra22 16 hours ago
      Well for one, Lojban is not better than English.
    • papa_pandora 17 hours ago
      what made you draw a parallel between the message being delivered by the blog and how the blog should be delivered?
    • ulfw 15 hours ago
      How is this comparable in any way?

      The recipients of the blog post (all of us) can read English. None can read whatever this Logjam is.

      If AI writes code why not write it straight into assembler or binary? No need to compile an intermediate language if the end user (the machine) is running on binary not on Python, nor on Rust, nor on BASIC or Swift or any intermediary human-optimised language

    • altmanaltman 18 hours ago
      A computer can understand all programming languages proficiently. How many people reading the blog know Lojban proficiently?

      I get what you are trying to say, but it's a pretty bad analogy.

      Also, all programming languages mainly use English in their syntax, but you are probably from an English-speaking country so you don't notice the irony.

      And most people using AI will not need to edit their code at all if things go right, right? They will just keep refactoring with AI, so why does the difficulty of learning a language or whatever matter in this situation?

    • LAC-Tech 18 hours ago
      [flagged]
      • GavinAnderegg 18 hours ago
        I'll state it plainly, then: Python is more widely used and supported. It has more examples, and more people understand it and can debug it. I hope that helps you.
        • rdevilla 18 hours ago
          [flagged]
          • cwillu 16 hours ago
            Oh fuck off.

            --Sincerely, A Canadian.

      • bix6 18 hours ago
        I found their reply funnier than yours
  • t43562 8 hours ago
    If AI writes your code, why use Rust?

    Why not use assembler? Why waste time trolling people that your one true language is the answer for LLMs when your view of the future is: no more programming full stop.

    • timbaboon 1 hour ago
      I gave exactly the same comment to a colleague at work today :D
  • briandw 1 hour ago
    In this paper, AutoCodeBench https://arxiv.org/pdf/2508.09101, LLMs seemed to do best on the strongly typed and functional language Elixir. This is surprising since there isn't very much training data. However, the examples it has seen are usually high quality, and there aren't a large number of different ways of doing the same thing in Elixir.
  • librasteve 22 hours ago
    Many here propose replacing Python with more performant, but less familiar, languages - mostly Rust and Go. But I find the AI-human interface to be the most important consideration. A simple version of this is “no, stick with Python if that’s what you know”. A more interesting version is “use this new-found AI leeway to move up the abstraction level”, “try something more expressive and human oriented”, “make a DSL and parser that suits the domain (and focuses the AI)”. Despite being a minority language, Raku is ideal for these aspects (esp with built-in Grammars and general kitchen-sink repartee) and works surprisingly well with most popular LLMs.
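    To illustrate the "DSL that focuses the AI" idea (sketched in Python rather than Raku, with a made-up toy grammar): a tiny domain grammar constrains what can be expressed, so any output - human- or AI-written - is checkable at parse time:

```python
import re

# Hypothetical two-verb command DSL: "add <qty> <item>" / "remove <qty> <item>".
# Anything outside the grammar is rejected up front instead of failing later.
GRAMMAR = re.compile(r"^(add|remove)\s+(\d+)\s+(\w+)$")

def parse(line: str):
    m = GRAMMAR.match(line.strip())
    if m is None:
        raise SyntaxError(f"not a valid command: {line!r}")
    verb, qty, item = m.groups()
    return verb, int(qty), item

print(parse("add 3 apples"))  # ('add', 3, 'apples')
```

    A Raku grammar would express the same thing more declaratively, but the principle is the same: the narrower the language, the smaller the space of wrong outputs.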
    • hirvi74 18 hours ago
      I honestly think Mojo is the dark horse in this race. That is assuming all the roadmap goals are fulfilled. We're talking about C++-like performance, Python syntax, complete compatibility with Python, designed from the start to interface with AI, compile-time metaprogramming like Zig, and all kinds of other goodies.

      So yes, people can bless Go and Rust all they want. Nothing is wrong with the languages, but I agree that learning them for the sake of AI usage is probably not the best idea if one is competent in a language already.

      Disclosure: Lattner is one of my programming heroes, so I might be biased.

      • zephen 17 hours ago
        I really wanted to like Mojo, but the more I read about it, the more it really wasn't Python even though, starting out, that was a major claim to fame.

        There is an excellent chance it will be awesome stuff. But they did themselves a huge disservice with the initial claim about trying to be Python compatible.

        • hirvi74 4 hours ago
          There is still Python Interop., which will be nice. Even if the syntax is not a one-to-one, it's better than nothing. Though, I do agree "100% compatible syntax" was an overzealous promise.
          • zephen 3 hours ago
            There's the syntax, of course.

            But then there's also the semantics. When something that looks like Python parameter passing actually passes a copy of the argument, it's not really Python at all.

            What's even more interesting? disconcerting? is that Mojo has two different ways of defining functions, and the one that most resembles Python already has this change.

            I'm all for new languages borrowing the best concepts from previous languages, and distancing themselves from them a bit.

            For example, this was discussed here recently: https://github.com/spylang/spy

            It has been obvious for a couple of decades that CPython is itself a Schelling point and that anything promising full Python compatibility can't keep up and will eventually die, so (to me) this bold unreachable claim seems like an unforced error on the part of the Mojo team.

            > Even if the syntax is not a one-to-one, it's better than nothing.

            To some extent this may be true. But back in the day, when I was working on projects where I would use multiple languages throughout the day, the cost of switching between languages actually seemed lower when there was more distance between the languages, so...

            > There is still Python Interop., which will be nice.

            Interop between Python and not-quite-Python will be valuable, sure, but it would be even nicer if the language had enough good facilities that people didn't need to continually exit it.

            Time will tell.

    • qotgalaxy 20 hours ago
      [dead]
  • paol_taja 3 hours ago
    I think this is mostly right, if we only mean "writing code." AI makes it much easier to write code in languages I would not have touched before. But the hard part is still deciding what should exist, what should not exist, which edge cases matter, how users will misunderstand the UI, and what mess you are leaving for future maintenance. So yes, AI weakens Python's "easy to write" advantage. But boring ecosystems, docs, deployment, debugging, and libraries still matter a lot. Even PHP still matters a lot to me...
  • g051051 2 hours ago
    Why not have the LLM go straight to LLVM IR? What would a program look like when you remove all (or most) of the layers of abstraction needed by humans? Or are LLMs too contaminated by the training data to do this? I almost wish I could try this.
    • Daishiman 1 hour ago
      You probably could, but the thing is that low-level languages do not encode intent as well as high-level constructs. You can trivially make an iterator in assembly, but in a high-level language a `for`, `map`, and `reduce` have specific meanings that help catch bugs.
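      A quick Python illustration of that point: the three styles below compute the same sum, but the high-level forms state the intent directly, while the index loop leaves room for the classic off-by-one:

```python
from functools import reduce

values = [3, 1, 4, 1, 5]

# Low-level style: the off-by-one bug class exists here
# (e.g. range(1, len(values)) would silently skip the first element).
total_loop = 0
for i in range(len(values)):
    total_loop += values[i]

# High-level style: the construct itself states the intent.
total_sum = sum(values)
total_reduce = reduce(lambda acc, x: acc + x, values, 0)

assert total_loop == total_sum == total_reduce == 14
```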
  • benced 13 hours ago
    The lumping together of Typescript with Python is a mistake. Typescript is much faster (mostly due to engine investment), is much saner, has more expressive types, and generally has better ergonomics for the backend than Python.
  • elcritch 10 hours ago
    For me it's all about Nim + LLMs. I'm greedy and want both fast-to-ship and fast-to-run. Readability comparable to Python, but with strict static typing that LLMs can't "cheat".

    I actually (mostly) enjoy reading the code that the LLMs create in Nim. It's quick to read and look through for refactors or cleanups. Compile times are in seconds, so the LLM is usually the slow piece. It's fun and productive. With Python + LLMs I'm seeing them just create ever more layers of unmanageable cruft.

    Recently I wanted "magic" behavior to get OpenAPI types and swagger.json along with auto parsing my rest APIs for me. I had Codex make a library for me using compile time reflection and a sprinkling of macros. Done, simple.

  • kylec 1 day ago
    This post resonates. I recently built a little web service to scratch an itch I've been having and after discussing the options with Claude we settled on Go, and honestly it's been fantastic. Highly performant, native threading, dead simple to deploy with containers. And I don't even know how to read or write Go.
    • queenkjuul 23 hours ago
      Go is fun, you should actually learn it
      • xtracto 23 hours ago
        Oh man... I like Go because it is compiled, performant, and strongly and statically typed. But "fun" is not something I would say about it. The ergonomics of error handling, the lack of a ternary operator, and other stuff that 30-year-old compiled languages already had...
        • abalashov 18 hours ago
          That sort of syntactic sugar goes against the Go philosophy. Don't get me wrong, I share your frustration, but I also see the value of consistency in their philosophy.
          • WestCoader 17 hours ago
            I'm starting to think all these languages having their own pet "philosophies" that is "totally better than X" is a shitshow and just personal preference masquerading as standards.
            • abalashov 16 hours ago
              Go is less a language than a philosophy. It was an angry reaction to 10,000 ways to do things, and overly clever (ahem, expressive) syntactic sugar.

              It is quite boring to write, but very easy to read.

              Not a Go fanatic. I use Go and various other languages, and was a decade and a half late to the Go party anyway. Just trying to explain the outlook.

        • queenkjuul 4 hours ago
          Idk, I'm having fun with it. The good outweighs the bad for me, and idk why people get so bristly about handling errors, it's more straightforward to debug than nested try/catches
      • kylec 23 hours ago
        I did go through the Go tutorial many many years ago, but it's been so long I don't remember anything. I do remember it was an enjoyable process though, and I'd love to pick it up again.
  • deng 15 hours ago
    > Nicholas Carlini, a researcher at Anthropic, orchestrated 16 parallel Claude agents to write a production C compiler in Rust.

    No he didn't. The compiler is basically useless, as it produces vastly inferior code compared to gcc/clang.

  • skybrian 23 hours ago
    This seems sort of like asking whether a chatbot should answer you in English or Japanese. Obviously, it should use whichever language you understand. If you understand Python best, why not write code in Python?

    But on the other hand, maybe you could learn some other programming language, particularly with AI help. If that's what you wanted to do anyway, it seems like a good time to learn.

  • bozdemir 12 hours ago
    Yea, I'd take it a step further: why bother with Rust? Just go write assembly, or better, the raw executable bytes... You see? Readability is very important :)
    • twelvechairs 12 hours ago
      Surely it's the same hierarchy as before. For most complex things you start with a high-level language to get something running quickly, then move towards low-level when you bed down the spec and need more safety, error reporting, speed, etc.
  • asdff 19 hours ago
    A better question is: why use any code at all? Generate random functions and select based on measuring the distribution of the output of these functions against metrics of interest. A pure black box of instructions that is more performant than any verbose code or algorithm we could come up with, because all we select for is performance above all. Directed evolution, essentially, of the codebase, generated through mutation and selection, just like everything else on planet Earth.
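    For what it's worth, that directed-evolution loop is easy to sketch (a toy Python example with made-up parameters, selecting candidate coefficient vectors purely on output error against a hidden target):

```python
import random

random.seed(0)  # deterministic toy run

def target(x):
    return 3 * x * x + 2 * x + 1  # the hidden spec we select against

def fitness(coeffs, xs):
    # Lower is better: total squared error against the target's outputs.
    return sum((coeffs[0] * x * x + coeffs[1] * x + coeffs[2] - target(x)) ** 2
               for x in xs)

xs = [i / 10 for i in range(-20, 21)]
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]

for gen in range(200):
    pop.sort(key=lambda c: fitness(c, xs))
    survivors = pop[:10]  # selection: keep the 10 best unchanged (elitism)
    pop = survivors + [
        [g + random.gauss(0, 0.1) for g in random.choice(survivors)]  # mutation
        for _ in range(20)
    ]

best = min(pop, key=lambda c: fitness(c, xs))
# Selection drives the coefficients toward the hidden [3, 2, 1].
print([round(g, 1) for g in best])
```

    In practice this only works when the fitness metric fully captures what you care about, which is exactly the hard part.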
  • BugsJustFindMe 4 hours ago
    Because I understand and can review Python, so in Python I can more easily see when the generated behavior subtly deviates from the specification. I don't understand and therefore cannot do that with Rust, and the entire point of using AI in the first place is so that I can have an easier time without needing to learn new languages.
  • beshrkayali 14 hours ago
    For now it’s the exact same reason why you’d use Python when you’re writing by hand: so the code is more easily readable/editable by humans who are more likely to know Python than something like Zig. But I understand the point the post is trying to make, I don’t think we’re there yet.
    • stringfood 14 hours ago
      The world where automation writes in a language no human understands reminds me of the completely pitch-black Chinese automated factories, where humans are lost and confused but robots are right at home
      • beshrkayali 14 hours ago
        Everyone is trying to figure out how and what are the optimal use cases. It could be like you said but it doesn’t have to be. There’s a lot of incentive for it not to end up like that.
  • vmg12 4 hours ago
    If I've learned anything from this thread, LLMs are best at whatever programming language you already like.
  • darepublic 5 hours ago
    I definitely prefer typescript to python. But typescript doesn't have a pytorch equivalent..

    Also AI doesn't write code in all langs / frameworks equally. For many cases, it will almost always fail first attempt at producing working syntax in various frameworks. Unless you document those cases and mitigate via an AGENT doc instruction or something you will have to churn at least one extra turn on all those cases.

  • harel 7 hours ago
    Because if you still need to read your code and understand it, it should be in a language you are comfortable with. And yes, you still need to read your code and understand it. If that's rust, good on ya. But if that's Python -good on ya too.
  • suncemoje 2 hours ago
    I still like to understand the code, components, and structure if it’s something that will run in production
  • 0xbadcafebee 23 hours ago
    I know a couple languages fairly well: C, Perl, Python, Bash. I never formally learned Go, but as a test of AI coding, I started some vibe coded projects in Go. It worked very well: the code is minimal, there are few dependencies, and it compiles down to a static app. But most importantly, I can actually read the Go code and understand basically what it's doing. I can also use LLMs to critique the code if I'm uncertain. The big benefit of Go is the simpler language and "batteries included" standard library. This leads to fewer dependencies and fewer lines of code, which improves overall AI output. In theory, AI should be able to write better code faster in Go than in another language like Rust.

    Python does have a much larger ecosystem of course, so with Go you have to develop from scratch what already exists in Python. But for smaller projects, you can also have an AI write a clean-room implementation in Go of some project in Python. So you aren't necessarily locked into one ecosystem anymore.

    And in my experience, you don't even need to know the language. I have a co-worker who's basically not a programmer, but got multiple implementations of applications working sooner than our dev teams doing it by hand. You should be a coder so you can architect and orchestrate the coding, but 'language' isn't a barrier anymore.

    • halfcat 21 hours ago
      > I have a co-worker who's basically not a programmer, but got multiple implementations of applications working sooner than our dev teams

      Deployed to production, right?

      Right??

      (I’m just kidding, of course it’s only on their machine, no different than Excel 5 years ago)

      > architect and orchestrate the coding, but 'language' isn't a barrier anymore.

      Never was the barrier.

      • 0xbadcafebee 20 hours ago
        Here's the kicker: The devs spent nearly 5 months on a solution, and it ended up being so crap it was abandoned. The multiple vibe-coded solutions were all better.

        Of course language was the barrier, that's part of why it was always hard to hire people. It takes years to get good at a particular language, and most people are idiots from bootcamps who learned a single framework.

  • captaincrunch 1 hour ago
    You could answer this by asking yourself - Why not just let AI code in machine language?
  • dotancohen 5 hours ago
    I've moved much of my vibe-coded projects from Python to Rust. That lets me vibe code an Android port as well - only the UI needs porting.

    If the app has a desktop GUI, that's still in Python with Qt. Maturin creates Python packages from Rust. It's terrific.

    https://github.com/pyo3/maturin

  • rundigen12 11 hours ago
    And why use readable variable names? "aA=q_(c8z,fW8)"

    Seriously though, almost all the examples in TFA are of rewriting existing code. It may be that Python is still best for the rapid dev iteration. Then sure, cross-compile into Rust via the LLM.

    Plus, If we care about token usage counts, Python has a lot more opportunities for compact "import thing_I_need" than having to generate entire libraries in Rust.

  • dragonelite 15 hours ago
    Kind of my fear is that the industry and dev community will ignore new frameworks, languages, architectures etc because the LLM aren't trained on those new things.

    For example low level converging to Rust, web frontends to something like React etc.

    • JodieBenitez 14 hours ago
      Arguably, we've focused way too much on new frameworks, languages and architectures for a while.
  • askos 8 hours ago
    Although a well-orchestrated agentic workspace could write most (if not 100%) of the code for me, I'd still feel more comfortable having it use a language that I'm confident with. Not necessarily because I'd want to read through the code. But still, occasionally it is easier for me to check something by looking directly at the code rather than wasting time and tokens haggling with an AI model over some nitty-gritty detail. And, more importantly, I want to understand the maintenance of, and be able to troubleshoot, the deployment stacks related to the programming language -- their virtues, their quirks, their security postures, etc etc etc.
  • jrickert 3 hours ago
    I’ll admit this article was enough to convince me to port one of my CLI tools from Python to Rust last night and I got a 30x performance gain with a binary 20% of the size. Not bad!
  • pmarreck 2 hours ago
    Exactly. Why would you not use the language at the pareto-optimal max for speed and restrictiveness, given the use-case?
  • burntcaramel 7 hours ago
    Exactly, this is why I’m using AI to write C or Zig that compiles to WebAssembly.

    The purpose of a scripting language was to make authoring easier, but now it's mostly a middle layer. You still get the benefit of a great standard library to keep you on track, but if you pick which parts to build as modular wasm and which parts to cover with reliable, proven code, you can find a good balance.

    For qip I chose to use Golang as its standard library is batteries-included with fs & networking.

    Then everything else is AI-coded wasm plugins.

    https://github.com/royalicing/qip

  • worik 27 minutes ago
    "Why use Python?" is a question I have asked myself for as long as Python has existed

    Too capable for a job control language, interesting objection, but if a job control language is too capable, it becomes a job implementation language...

    Too slow to implement applications. So many examples of big balls of Python code creaking and struggling, slowly. Just one example: my Home Assistant installation running three lights overwhelmed a Raspberry Pi 4, causing it to crash once a week.

    To me Python is a poster child for "popularity is not quality".

  • jdw64 18 hours ago
    To put it simply, Python feels recoverable when something goes wrong, but Rust often feels like solving a compiler puzzle. Honestly, I still do not really know how to handle lifetimes properly.

    When I use AI to help with coding, there is almost always a point where it gets stuck and I have to solve the problem myself. If I were using Rust at that point, it would be much more painful.

    I know Rust has a very strong reputation in the community, but to be honest, I find it a difficult and frustrating language to work with. I would use it when I truly need systems-level performance, but for most high-level work I would rather use Python, because I can move much faster. In most projects, that level of raw performance is not actually necessary.

  • b800h 14 hours ago
    If you're working with an agent to write code, you want it in the most quickly-readable format possible. That's generally Python, although YMMV. I want to be able to skim and zoom in on parts of code that might need attention. This makes it easy.

    If the code were written in Java, I'd have more to read. If it were in JavaScript, I'd be slower following the calls (although the type system might catch issues more quickly - not a problem in my experience). I think Python is a good choice.

    • munksbeer 13 hours ago
      > If the code were written in Java, I'd have more to read.

      That is not really the downside people think it is. Java is a remarkably easy language to read and understand.

  • winrid 19 hours ago
    Claude writes java pretty well, and faster than Rust. It's a great middle ground for some projects. I've switched back from Rust to Java for some things.

    I don't know why you would use Python at all except for small iterative projects. If you hate java for some reason, there's Go...

    • jeremyjh 19 hours ago
      It certainly makes sense to use python for ML or data science.
      • winrid 17 hours ago
        Right sorry, that's not in my wheelhouse so I didn't think of that. I should be more specific. For general backend / data processing/pipeline stuff, API servers ...
  • meander_water 14 hours ago
    One underrated advantage of using Python or Typescript is that AI agents can inspect the code of installed dependencies.

    This means you don't have to muck around with supplying the right documentation for each version of each dependency, or worry about hallucinated interfaces (at least with the latest models).

    In the past you'd have to dig through a foreign codebase manually to figure out why a documented interface for a dependency is not working as expected, but frontier models automate that quite well.
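    For example, because installed Python dependencies ship as readable source, an agent (or you) can locate and read the actual implementation behind a documented interface. A sketch using the stdlib `json` package as a stand-in for any third-party dependency:

```python
import importlib.util
import inspect
import json

# Locate the on-disk source of an installed package — here stdlib json,
# but the same works for anything in site-packages that ships as Python.
spec = importlib.util.find_spec("json")
print(spec.origin)  # filesystem path to json/__init__.py

# Read the real implementation behind the documented interface
src = inspect.getsource(json.loads)
print(src.splitlines()[0])  # the def line of the actual loads() function
```

    A compiled dependency only exposes its binary, so the agent falls back on docs — which may not match the installed version.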

    • adius 13 hours ago
      LLMs are now smart enough to simply download the code of any project they want to inspect. So this argument doesn't really hold up anymore …
      • meander_water 12 hours ago
        Sure, but will they download the right version? And will they be inspecting the right files on disk? There's a whole lot more that can go wrong
  • rick1290 23 hours ago
    I'm still not sure. Would love thoughts on this... but in this new AI world we're in, is it better to go full-stack TypeScript, or to go with proven mature frameworks: .NET, Ruby, Django, etc.? It seems TS is moving fast, but maybe it's time to not reach for the shiny object and stick with proven tech? Or in 5 years will we regret it?
    • stiiv 10 hours ago
      For building web applications or a system that includes logic that needs to run on the web? TypeScript is mature enough, and it's top tier for domain modeling. As long as you stay disciplined, Claude Code will write excellent TypeScript for you, and you can run it pretty much anywhere.

      The only reasons to hesitate, imo, are (A) you're worried that it won't perform as well as you need on your servers, or (B) you're scared of npm supply chain attacks.

    • halfcat 23 hours ago
      The main risk-of-regret is: How will you feel when/if the $20/month plan costs $2000/month?

      May never happen. But be clear with yourself if you’re relying on it not happening.

      It’s a hell of a nice risk mitigator to understand the code, in a language you know, if you have to print-debug it yourself at some point.

  • deferredgrant 3 hours ago
    Even if AI writes the first draft, humans still debug, review, and explain the code. Python's advantage is not just that it is easy to type.
  • arjie 17 hours ago
    Actually, I do use compiled languages for this reason. Even Opus 4.7 and GPT-5.5 will leave unassigned variables lying around in Python code of sufficient size. If you've got sufficient testing you'll exercise all paths, and I imagine a good prompt would ensure adding testing with coverage to see that it does happen. However, I do not have (yet) such a system but using Go/Rust helps a lot because the compile phase actually helps detect correctness issues.

    My other problem with most of the other ecosystems: ts/npm, python/uv, rust/cargo is that they all have build-time scripts that are controlled by others that execute automatically. This is a real problem because the LLM will just install things and proceed to send your home directory through a juicer. I feel a bit of a paranoiac now doing this, but I have a script that launches a podman container with just the source directory and a binary directory loaded (for caching) which compiles everything.

    I know there's some sequence of steps I can take to protect myself, but if the LLM accidentally uses pnpm to run dev build scripts when I had the right config on npm or whatever, I know I'm screwed. So now I do all these shenanigans with Rust (to the extent that I vendor old deps sometimes). So the ideal language to me now is one with very few of these footguns and sandtraps which has a tight iteration loop.

    • luckystarr 16 hours ago
      And you can spend your effort on features and architectural issues rather than smaller-scope bugs. My experience is that Rust enables me to focus on features as long as I don't give the AI free rein. Architecture matters for correctness bugs, because some solutions are inherently more prone to the AI becoming confused along the way than others.

      The more effort I spend on planning architecture with the AI, the less runtime bugs I need to investigate after it did the implementation.

  • redbell 12 hours ago
    > Andreas Kling, creator of the Ladybird browser and a career C++ engineer, ported Ladybird’s JavaScript engine from C++ to Rust in two weeks

    Discussed here with 698 comments (https://news.ycombinator.com/item?id=47120899)

  • k8si 6 hours ago
    because it's likely that a lot more of the training data is in python than in rust, so coding models are less likely to mess up python code? just based on PL popularity stats e.g. https://madnight.github.io/githut/#/pull_requests/2024/1 if the training data is crawled from real codebases then there's gonna be more python than anything else.

    in my personal experience, the one time I tried to do something in rust, opus flailed for several feedback cycles and I finally had to relent and do substantial guiding/intervention. which was not great bc I have no idea how to write rust either.

  • shevis 3 hours ago
    Because you still have to read the code AI writes. I would argue it’s even more important than ever for code to be human readable.
  • an0malous 23 hours ago
    The ideal language for AI coding:

    1. Type safety as basic guard rails that LLM output is syntactically and schematically correct

    2. Concise since you have to review a lot more code

    3. Easy to debug / good observability since you can't rely on your understanding of the code. Something functional where you can observe the state at any moment would be ideal.

    4. A very large set of public code examples across various domains so there's enough training data for the LLM to be proficient in that language

    5. A large open source ecosystem of libraries to write less code and avoid the tendency for generated code to bloat

    It's basically all the same things you look for in general. I think TypeScript scores high here but I'm curious if anyone knows of a language that fits these criteria better.

    • pdimitar 23 hours ago
      Golang. People trash it for being verbose on errors but it's an extremely readable language and it's almost like bash, only much stronger typed and with a very rich stdlib (so it's not likely you'll need a library for a quick script).

      It's more or less a perfect replacement for Python for "one-off programs" and "quick scripts". Many bonus points for not having to fight shell quotation rules and trying to remember differences between sh, bash and zsh.

      • ASalazarMX 23 hours ago
        In a world where AI supposedly can write in any language, Go is much better choice than TypeScript. Imagine contemplating for more than a few seconds a choice between simple, fast, cross-compilable language, and a TypeScript -> JavaScript -> Interpreter -> JIT stack.

        If you don't know Go, it's more efficient to learn it than to waste the hardware resources of thousands to stay within JavaScript.

        • bottlepalm 18 hours ago
          'Waste of hardware resources'? Ok then write your apps in Rust.

          If it doesn't matter, and for most applications it doesn't, then TypeScript is far more readable than Go - so use that.

        • pdimitar 23 hours ago
          Absolutely. And in this same thread I am noticing people offering Java (lol). Yeah, we all need 1.5s startup time for one-off scripts, surely.
          • pron 20 hours ago
            Well, these days a small CLI program in Java (say, ls) starts up cold, runs, and terminates in ~70ms, not 1500ms, but yeah, sometimes 70ms is too long to wait for a script.
            • pdimitar 20 hours ago
              People never believe me when I say it but I start noticing scripts needing 75-100ms to start. Modern hardware is ultra fast; I want my programs to make full use of it. I got no patience for tech or people who keep insisting "it's not much, it'll not kill you". Well duh, obviously it will not but that's not the point and never was. I want stuff to work between my blinking my eyes and I have achieved that hundreds of times over the course of my career.
              • pron 20 hours ago
                That's perfectly fine, and I totally understand people who don't want to sit and wait 70ms for their script to finish running (that 70ms is not the time it takes to start), but let's not turn a <40ms startup into 1.5s. Now, it is true that if you want to launch a minimal HTTP server in Java you may need to wait ~100ms, which may be too long for you, but is also a far cry from 1.5s.
                • pdimitar 20 hours ago
                  It is, but I am still quoting what I saw before, it was not a fantasy. I don't deny it's likely better nowadays, sure, but I remain moderately skeptical because JVM is still a runtime that needs to boot.

                  Then again, Golang has one as well, though it does manage to start it up faster it seems.

    • wglb 19 hours ago
      I use Lisp for my projects

      1. Type checking built in

      2. More concise and readable than most languages

      3. Trivial to inspect while running, with the ability to change a running program

      4. There seems to be a massive amount of Lisp that it is inhaling from somewhere

      5. Large amount of libraries

      This has the added benefit that even if you publish the code, nobody will be stealing it.

      Edit -- I find it very useful to write tests for critical functions. This catches situations where the agent decides some interesting functionality is no longer interesting.

    • dukeyukey 23 hours ago
      This is just Kotlin. Strongly typed, more concise than Java or Go (and probably Typescript), less likely to blow up at runtime than Typescript, epic tooling, plenty of public code, and a library for basically anything because JVM.
      • pdimitar 23 hours ago
        And needs the JVM to start for 1.5s before you get any results. Sure.

        Golang or just shell scripts.

        • dukeyukey 22 hours ago
          The JVM takes tens of milliseconds to boot up, not a second and a half.
          • pdimitar 22 hours ago
            Obviously it depends on a bunch of factors but -- not on my machines. They are all with Intel and AMD CPUs and I don't use M-series Macs.

            Never saw an instantly starting JVM in my life though.

            • pron 20 hours ago
              Java runs a Hello World, cold, in a packaged JAR, in about 40ms. What you've seen isn't JVM startup but programs that do a lot at initialisation (like MS Word), as many Java programs like to do (because they often expect to run for a long time, so they don't care about startup time).
              • pdimitar 20 hours ago
                I have not worked with Java in a long time but I seem to remember that most Java programs also accrue a good amount of dependencies and some of them have their own init routines.

                That adds up, fast. No idea how is it nowadays, admittedly. Maybe a ton of optimization work was done.

                • Mashimo 14 hours ago
                  > I have not worked with Java in a long time

                  > No idea how is it nowadays, admittedly.

                  Yes, between Java 8 and modern Java there were changes to the GC, startup time, JIT, and probably more.

                  If you want it to, Java should now start pretty quickly.

    • MaxBarraclough 23 hours ago
      > Concise since you have to review a lot more code

      Isn't readability what matters here? Conciseness isn't the same thing.

    • fluffyspork 23 hours ago
      C. At least with Gemma 4 it does a fine job. Writes good error checking. Writes memory management. Mostly straightforward and easy to read. A lot of libraries. Runs everywhere.
    • OliverGilan 23 hours ago
      I’d also argue it needs to compile fast/ have fast static analysis. Feedback loops like this are super helpful for agents
    • tptacek 23 hours ago
      Type safety feels like the big one; anything you can shift to static/compile-time regimes benefits agents immensely.
      • iLemming 22 hours ago
        There are two working LLM axes. Critic strength: how much the language catches before runtime. Sensor strength: how good the empirical feedback loop is. LLMs benefit from both, but the sensor axis often is undervalued.

        Type safety is great, but you can't just quietly disregard the benefits some dynamically typed languages provide; that would be completely ignoring that different tasks weight the two axes differently.

        Systems code, performance-critical code, code where correctness across all cases matters more than exploration: parsers, compilers, network protocols, data structures - statically typed languages (like Rust) give you an edge here. The compiler's depth pays for the verbosity, and exploration is less of the work because the problem shape is known up front.

        For stuff like building a web scraper, or rapidly prototyping, or exploratory scripts, something like Rust would be actively bad. You cannot poke at a live browser (you can with Clojure). Async Rust adds another layer of type complexity. The signal-to-noise for "figure out what is on the page" collapses entirely.

        If I were picking a single language for general LLM-assisted work, weighted across task types, it would be Clojure (or Elixir), with OCaml as the most interesting alternative if the ecosystem were stronger.

        • malloryerik 20 hours ago
          Using Clojure and Elixir and LLMs are fantastic with both. Sure, if I get to a super-stable situation then maybe I'd consider moving to Rust (or Jank?), but for now I'm just so happy with Clojure and Elixir in this new world. I'm solving new problems with fully bespoke architecture so the flexibility is key. Clojure for business logic and most DB. With Elixir, it's the actor model and hand-holding as I'm using it for the web layer. I bet Ruby on Rails would also shine for some cases, prob most CRUD for example.
          • moosehater 19 hours ago
            What made you use Clojure for business logic and DBs rather than using Elixir for everything? The JVM ecosystem?
            • malloryerik 18 hours ago
              For me, I need to move fast and already knew Phoenix well, LiveView fits my use case, and websockets setup with Phoenix is very clear so switching to a two-language setup seemed better than CLJS. I could have gone CLJS re-frame and all that but it would have been more work and more unknowns. I call LLMs from Elixir also so all of the reconnect, backoffs, papercuts, shenanigans and so on, well I just know how to do this kind of thing better in Elixir. In its way Elixir is a great, like, defensive language. I was able to keep most async in Elixir and Clojure mostly synchronous. There was some pain though with bridge between the two and at times I thought I'd made a mistake. Clojure is fantastic with data and Datalog databases, so no regret. Outside world deals with Elixir, and the temple is in Clojure and Datalog.
          • iLemming 19 hours ago
            > fantastic with both

            Most developers evaluate programming languages by comparing features in isolation, never stepping back to consider the overall experience of using one.

            Features are easy to talk about. They're discrete, nameable, and comparable. "Does it have Foo?" is a question you can actually answer. "What's it like to build and maintain a real system in language X for two or three years?" isn't. So people default to what's measurable.

            Most devs haven't spent serious time in more than two or three languages in production. Without that contrast, the holistic experience is invisible - you don't know what you're missing, and you don't notice the pain you've learned to live with.

            Language communities form around features because features make good rallying points. "We have algebraic types." "We have macros." These become identity markers. The holistic experience doesn't tribalize as cleanly - it's harder to put on a t-shirt.

            There's also a sunk-cost angle: devs who've spent years in a language have every incentive to believe its features justify the investment. Honestly evaluating the overall experience might undermine that.

            The irony is that the languages with the most devoted communities tend to be loved for exactly these holistic reasons - the ones that are nearly impossible to convey through a feature list. You can rave about Clojure or Elixir all day, but a curious newcomer will land on the homepage, scan the features, and walk away unimpressed: "Meh, it doesn't even have Foo. People say this is great? They clearly don't know what they're talking about."

            • malloryerik 17 hours ago
              Well in a recent project I tried TypeScript thinking, OK, LLMs, huge training corpus! massive adoption! api for everything already set up! swim with the current! and I tried various frameworks and so on, but for me reasoning about things and being able to make systems that I could adapt and pivot it was honestly inferior compared to niche Elixir and Clojure. But it's not like I hate JS; I use it in LiveView all the time. And don't mean to imply there are no problems in niche-land though; you've got to be willing to do more yourself and live in a tiny world. Really, LLMs kind of tamed Clojure for me because it seems so far at least that they can handle the glue code and stitching libraries together pretty decently as long as you don't get lazy with architectural choices and stay vigilant. And if I ever hire it pretty much has to be remote or learn on the job, though again LLMs reduce this pain greatly.
        • zmmmmm 19 hours ago
          > Critic strength .... Sensor strength

          that's a nice breakdown

          I think there's something key you get at in terms of the combo of dynamic environment + type safety maximising both. With a dynamic environment, the LLM can do a lot of interrogation to understand the problem space on the fly. I've witnessed agents sort out pretty complex issues through `python -c "..."`, `groovy -e "..."`, executing snippets of code with Node etc which is much less accessible if they have to compile it first. They can also inject logging code that interrogates the runtime as well (what type do we really have at line 1003?) etc which works better with runtimes that have deep introspection capabilities.
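          A sketch of that probe pattern: shelling out a throwaway `python -c` snippet the way an agent does to inspect live values with no compile step (the probed expression is invented for illustration).

```python
import subprocess
import sys

# A throwaway runtime probe: evaluate a snippet, observe the real type
# and shape of a value, discard the process.
probe = 'import json; v = json.loads(\'{"a": 1}\'); print(type(v).__name__, sorted(v))'
out = subprocess.run([sys.executable, "-c", probe],
                     capture_output=True, text=True)
print(out.stdout.strip())  # dict ['a']
```

          In a compiled language the same question costs a scratch file, a build, and a cleanup — a much slower loop for the agent.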

          • iLemming 18 hours ago
            What you're describing is fast scripting in a dynamic language, which is genuinely useful - I agree it beats 'edit, compile, link, run' for exploration. But a Lisp REPL isn't 'dynamic language plus introspection'. A Lisp REPL is a persistent connection to a running process where the agent evaluates expressions against live state and can redefine code in place. python -c throws the process away every time; a REPL keeps it. The difference is the same as between sending one-off curl requests to reconstruct a session versus having an open SSH shell into the box. Imagine using a Playwright/Puppeteer session where you can navigate to a page and interactively palpate every DOM element, like playing a video game, directly from where the code is. Now imagine giving that power to the LLM - it doesn't need to restart, re-compile or even save anything - it just goes and explores, changing the program behavior on the fly.

            The type-safety-plus-dynamism point you make is real and interesting (basically Clojure with Spec/Malli), but it's orthogonal to whether you're using a REPL or just shelling out snippets.

        • Taewoong1378 15 hours ago
          [flagged]
      • agdexai 17 hours ago
        [flagged]
    • ane 23 hours ago
      Java?
      • sgt 23 hours ago
        Was thinking the same. Modern Java is similar or at least quite a bit closer to many other less verbose languages. Not like your dad's Java anymore.
    • iLemming 22 hours ago
      [dead]
  • munro 23 hours ago
    Lately I just have Claude build most things in Rust, it's really amazing. I tried Go, but I found it wasn't as good--Rust really does to me feel like Python. That said, it still struggles with the same class of errors of building complex systems. I've tried using TLA+, Alloy, and other things but haven't found the trick yet. The best I've found is reimplementing all external systems in memory and e2e testing everything extensively, without reimplementing the tests become unusably slow, and Claude can rewrite huge surface areas with ease--it's somewhere between mocking and literally just reimplementing the external systems.
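    A minimal version of that in-memory reimplementation pattern (all names invented): the double implements the same interface as the real client, so e2e tests stay fast and the fake's behavior can't drift the way per-test mocks do.

```python
class RealStoreClient:
    """Talks to the real external key-value service (stubbed out here)."""
    def put(self, key, value): ...
    def get(self, key): ...

class InMemoryStore:
    """Same interface, reimplemented on a dict — fast enough for e2e tests."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def archive_order(store, order_id, payload):
    # Application code depends only on the interface, not the backend
    store.put(f"order:{order_id}", payload)
    return store.get(f"order:{order_id}")

store = InMemoryStore()
print(archive_order(store, 7, {"total": 42}))
```

    The trade-off is keeping the fake faithful to the real service's semantics, which is where it sits between mocking and a full reimplementation.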
  • schmookeeg 1 day ago
    I assume this is why things like PyO3 are popping up? If so, sort of a fascinating way to compartmentalize new rust code into legacy .py code in lieu of a refactor, or at least, a way to do a staggered refactor and eat the elephant in bites :)
  • Koshkin 3 hours ago
    For some reason it took Copilot a while to write executable machine code for Hello World.
  • alkonaut 11 hours ago
    Agreed. Even if Python or JS were the language I knew well, even if the platform ecosystem is the one I need, I'd _still_ make very sure to use at least strong types (even if not static) for anything an AI co-creates and is maintained longer term.

    Rust isn't perfect due to rather long turnaround for compile/test iterations, but a lot of those can be avoided if the type checking is quicker than compilation. Rust is also more verbose than python and other very high level languages, which means your token budget is eaten more quickly as it works on a lower level.

  • red_admiral 7 hours ago
    > Microsoft rewrote the TypeScript compiler in Go

    That was not on my bingo card!

    A 10x speedup by switching to Go is impressive.

    (Why not rust? Linked to from the OP: https://thenewstack.io/microsoft-typescript-devs-explain-why...)

  • dewarrn1 2 hours ago
    If AI writes your code, why fuss about the language it writes in?
  • rienbdj 7 hours ago
    The post talks a lot about performance, and indeed Python performance is poor. However, it’s not poor enough to matter in the early stages of most projects.

    The larger issue is actually correctness IME. Rust offers a better static-type story than Python, sure. But I would consider Haskell or OCaml to get even further gains.

  • repiret 15 hours ago
    My experience is that there's a correlation between powerful type systems and the property that once your program compiles, it's correct. Compiles == correct is rarely true in C or JavaScript. It's often true in Haskell and Rust. TypeScript is somewhere in between C and Rust.

    There's a niche available for a language which is relatively easy for a human to read, but with a very powerful, if difficult to use, type system. The language would let you make all sorts of assertions whose meaning is easy for the human to see, but which to compile would need to come along with correctness proofs. The language is meant to be written by AI, which can battle the compiler and write the proofs, but then read by humans, who can verify that the AI wrote the program they wanted and/or direct the AI to make changes.

    • munksbeer 13 hours ago
      >My experience is that there's a correlation between powerful type systems and the property that once your program compiles, it's correct. Compiles == correct is rarely true in C or JavaScript. It's often true in Haskell and Rust.

      I find this staggeringly hard to believe. Most bugs are logic errors. How does Rust or Haskell prevent these?

      • mightybyte 1 hour ago
        Haskell gives you quite a powerful set of tools for constraining and reasoning about your program's behavior. For instance, its ability to define pure functions and control side effects is a very powerful tool for preventing certain classes of bugs. Dereferencing invalid pointer locations and out of bounds array lookups are large classes of bugs in mainstream languages that Haskell basically eliminates entirely. It's not at all the same thing as what you get from the type systems in languages like Java, C++, etc. You really have to try it to appreciate it.
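        A rough Python analogue of one of those ideas (Haskell enforces this at compile time; in Python it is only a convention): make the "not found" case part of the return type instead of an exception, so callers must handle it.

```python
from typing import Optional, Sequence

def safe_index(xs: Sequence[int], i: int) -> Optional[int]:
    """Total function: an out-of-range access yields None, never an exception."""
    if 0 <= i < len(xs):
        return xs[i]
    return None

# The caller is forced to deal with the None case explicitly.
value = safe_index([10, 20, 30], 5)
result = value if value is not None else -1
```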
      • yakshaving_jgt 10 hours ago
        > Most bugs are logic errors.

        Are they? IME most bugs are type errors.

        Or rather, IME most bugs are logic errors only because I've excluded the possibility of type errors by using a sophisticated type system.
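        One small, hedged illustration of the kind of bug a checker can exclude even in Python (the names here are invented for the example): `typing.NewType` makes swapped arguments a static type error, even though at runtime both are plain ints.

```python
from typing import NewType

UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

def cancel_order(user: UserId, order: OrderId) -> str:
    return f"user {user} cancelled order {order}"

# Fine for both a static checker and the runtime:
message = cancel_order(UserId(1), OrderId(42))

# cancel_order(OrderId(42), UserId(1)) would be rejected by a static
# checker such as mypy, even though it would "work" at runtime.
```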

        • munksbeer 6 hours ago
          Most of my bugs are logic errors. I write Java. Your comment seems to imply that moving to Rust or Haskell would make my program correct if it compiles.
          • yakshaving_jgt 6 hours ago
            I don't think porting your program to Haskell would make your program correct.

            I think porting your program to Haskell would make all of your bugs logic errors, rather than only most of them.

  • stevefan1999 16 hours ago
    Really controversial but my honest opinion: that's because programming languages, like their natural-language counterparts, are nowadays increasingly becoming a political tool rather than a technical one.

    I observed this through the attacks on Rust due to the huge presence of LGBT people.

    Now while I'm pretty much straight myself, I don't reject LGBT people and don't want to partake in identity politics.

    I just want things that work no matter what background you have, yet there are some people attacking Rust because of its inclusive nature.

    And just like Linux is perceived as nerdy, geeky, and "gaming socks ready", this tokenization of things, and the attaching of political meanings to them, is quickly coming to everything, so perhaps I'm being too general here as well.

    Let's say it is not political, but it is definitely acquiring more meanings beyond its technical origin and nature.

    • jiriknesl 9 hours ago
      Most of the attacks on Rust I have seen have nothing to do with the people implementing Rust.

      They have a lot to do with the fact that Rust is a very low-level language, a direct C++ competitor, and many people use it for apps that could be easily implemented in much higher-level languages and still run fast enough.

      A driver or kernel extension in Rust? No problem. A todolist SaaS startup with no users? It's better to use Rails, Django, or Laravel for that.

    • thefounder 16 hours ago
      Why do we have discussions about sexual orientation in the context of programming languages? Could this really get any worse?
      • stevefan1999 15 hours ago
        I won't say it is just because of sexual orientation, but more because of the identity politics associated with it.

        It's not just the oversimplification of "what kind of people I like"; it's more about your attitude towards gender stereotypes and roles, for that's the deeper connotation I saw.

    • Mashimo 14 hours ago
      > I observed this through observation of the attacks to Rust due to the huge presence of LGBT people.

      Never seen that before, but then again I'm not in the rust community.

      > don't want to partake in identity politics.

      If you write Rust, or let AI write rust, do you have to partake in the identity politics?

      The internet is full of memes and jokes about how shitty Java and JavaScript are. Yet it never came up at work. It never stopped me from writing Java.

      Just like Emacs vs Vim: I'm just using Nano. I've never had that discussion IRL. And at work everyone uses Idea.

      It's hard for me to see how writing Rust somehow gets you into identity politics. Did that actually happen to you, or is it something you are afraid of?

      • Ygg2 14 hours ago
        > Never seen that before, but then again I'm not in the rust community.

        As a straight guy, the number of times people have attacked Rust for catering to "that crowd", being a "DEI language", and the "woke mind-virus" has been pretty huge on Xitter.

        Which is always hilarious to me, since language itself doesn't have anything offensive.

        > If you write Rust, or let AI write rust, do you have to partake in the identity politics?

        The answer is of course no. However, by choosing to write it you'll be perceived as anti-Zig, anti-C, pro-woke, etc.

        • Mashimo 13 hours ago
          Fascinating.

          > However by choosing to write it you'll be perceived as anti-Zig, anti-C, pro-woke, etc.

          I don't even know what Zig or C is. (Please don't tell me) Edit: Oh, C the language. From context I thought it was short for something on the anti-woke side :)

          But who is checking what language you are vibe coding at? And does it matter to you that those people perceive you as anti-zig?

          There is probably someone on Xitter who thinks me not using Vim is just plain wrong, but that has no influence on me. To be completely honest, this all sounds like a non-issue.

          I mean, there is also an anti-AI crowd (r/antiai), but who cares what people on the internet think?

  • wg0 7 hours ago
    Because AI will only write the code; you still have to read and maintain it. You can ask AI to write it in Brainfuck too. But there will surely come a time when the AI doesn't get what exactly you want, and you will be pulling your hair out in frustration.

    Therefore, write in what you can manage later.

  • 4l3x4f1sh3r 7 hours ago
    I can imagine that in the future we might have a language that no human understands, but that is perfectly optimized to be written by AI. Since humans will only be "project managers", this makes perfect sense. Not sure if I like this idea, though. Crazy times ahead.
  • thefounder 16 hours ago
    So he includes Go in a list of languages that apparently make development slow and have "a build system that fought you", and then says Python was the solution for all that. I think he got it backwards. I have found the Python build system horrific and broken by default, while Go just works.
  • bborud 8 hours ago
    I'm not fond of functional languages. It isn't that I don't get why they are a really good idea (they are). I just can't stand the syntax, the lack of proper standard libraries, and the lack of mainstream-big-ecosystem'ness. Mainstream languages are nice because there is lots of code, documentation, discussion and I can find people I can talk to. And you can write real code in situations where you may not be the one to maintain it 10 years from now. Or 5. Or 2.

    Agentic coding changed that. A bit.

    I still dislike most functional languages because my brain doesn't work with their syntax, but these languages are REALLY good targets for agentic coding.

    I'm a backend developer who occasionally needs a frontend slapped onto something. So I have been through all the usual suspects. Angular, React, Vue. All terrible reminders of why I try to stay away from the frontend. Touch it and you roll around in tons of dysfunctional tooling, weird complexity and gimmicky mechanisms that are ridiculously fragile. It isn't just as if a bunch of cats wrote the code, but they are feral cats. And if you point out just how messy things are, they just hiss at you and piss on your shoes.

    And then I discovered Elm. Not only does it not crap all over my git repository, LLMs love Elm. Yes, it poops out a JS blob. But I don't have to look at it. I can just pick it up with my long tongs and drop it into my server using embed.FS in Go.

    Perhaps I should overcome my peculiarities and love Elm too.

    Anyway.

    Anything that can make Python go away I'm for. It is not for writing programs that will ever leave your workstation and be inflicted on others.

  • JodieBenitez 14 hours ago
    Well, I'd still want something I can read...

    Asking Clodex to build me a hello world web backend in Rust, Go, Python: Python is read with great ease. Go is fine too, a bit verbose but still ok. Rust hurts my eyes.

    I'd settle with Go for this use case.

    • claudeCfail 13 hours ago
      I wouldn't overthink TFA. I mean, look at one of the examples of progress they gave:

      > Nicholas Carlini, a researcher at Anthropic, orchestrated 16 parallel Claude agents to write a production C compiler in Rust. 100,000 lines. It boots Linux 6.9 on x86, ARM, and RISC-V. It compiles QEMU, FFmpeg, SQLite, PostgreSQL, and Redis. It runs Doom. Total cost: just under $20,000 across nearly 2,000 Claude Code sessions.

      Anyone who spends even 10% of an unhealthy amount of time on Hacker News should be able to confidently say: it didn't boot, it didn't compile, and it did not run a Hello World, much less Doom. It was a 20 thousand dollar fiasco and a joke.

      https://news.ycombinator.com/item?id=46941603

      Of course you want code you can read. You live in the real world, and have a real world use case. One where you haven't yet learned to review Rust code. TFA does not live there.

  • dizhn 12 hours ago
    In my experience Python is fine. However, both Go and Flutter (Dart) are better due to the tools that are available, especially the compilers. Flutter in particular surprises me with how good LLMs are at it. Both might actually be thanks to good documentation. Maybe also that they are not VERY fragmented and don't have a lot of baggage.

    Frontend CSS/HTML is pretty bad though. Although they can work, it takes a lot of pushing. It's probably normal since they do not actually have eyes yet.

  • tom_ 19 hours ago
    Well don't ask us. If AI writes your code, why not ask it? You could probably make it write a whole article for ya.
  • nextlevelwizard 15 hours ago
    Because there are no negatives, only positives.

    I can maintain the Python code myself and I can execute it everywhere.

    If I let my LLM write in Rust then when things break I am out of luck. Also Rust needs to be compiled which means I can't just share the code as freely.

    • linsomniac 15 hours ago
      >I can maintain the Python code myself and I can execute it everywhere. [and share it more easily]

      Python can be kind of a pain in the butt to execute everywhere because of libraries. I thought uv script headers and shebangs were going to fix a lot of that, but I'm still running into issues (machines firewalled off so uv can't grab the deps; I have some code that just doesn't seem to work in uv on a Mac...). And for sharing: once the code splits out into multiple files and modules, sharing it starts looking like sharing any code.
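      For reference, the uv mechanism mentioned above is PEP 723 inline script metadata; a minimal sketch (the `requests` dependency is illustrative, standing in for whatever the script actually needs):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = ["requests"]  # illustrative third-party dependency
# ///
# uv reads the TOML block above, builds an ephemeral environment with the
# declared dependencies, and runs the script, so a single .py file can be
# shared and executed anywhere uv is installed (and the network is reachable).
import sys

def python_is_supported() -> bool:
    # Mirrors the requires-python constraint declared above.
    return sys.version_info >= (3, 10)

print("ok" if python_is_supported() else "needs newer Python")
```

      Which is exactly where the firewall problem bites: without network access, uv cannot populate that ephemeral environment.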

      Don't think I'm a Python detractor; I'm a PSF Fellow, I love Python, and Claude has been writing quite good python for a while here. But I just tried a serious project with Claude writing golang (an apt proxy/cache that is resilient against upstream DDoSes, a fairly complex piece of software), and I must say it did a fantastic job. I end up with an executable I can easily run and copy around.

      I'm still going to be using python for a lot, but I can definitely see myself having Claude write golang for more things in the future.

  • doublesocket 14 hours ago
    Why stop at getting AI to write Rust? If everything is vibe coded and code is no longer reviewed, get an LLM to devise its own ultra terse, super dense language intended solely for minimal token use and speed.

    /s... sort of

    • haspok 14 hours ago
      Why stop at writing code? We should all build our custom ASIC chips, or if you don't have a chip fab, at least do FPGA!
    • ccimmergreen 14 hours ago
      so in other words... simply binary?
  • jmward01 4 hours ago
    Language wars are, in a word, silly. You use the language that your team knows best 99% of the time. All the arguments about performance and safety have fallen flat for me because the majority of times performance and safety are most impacted by complexity which is driven by how good people are with a language more than the language itself. I have seen rust and go that the team was uncomfortable with that led to slow and unsafe results where the same team could have shipped faster and safer python. Additionally, per line speed is driving actual performance less. Is that web page load slow because of python or the 16 API calls to LLMs and other big services you are making? Did switching to rust speed those calls up 10 or 100x? So the opening arguments are predicated on an assumption I don't accept. rust isn't 10-100x faster, it -can be- when rendering a fractal for fun but in practice is it?

    To answer the title question though, why use Python? I think Python and higher level languages will become even more valuable since pairing up with code assistants requires keeping a higher level view of what is going on. You want to avoid the weeds, not emphasize them. You want the language used to be as easy for the human as possible so the human can stay involved. That means that my opening argument stays intact, use the language that the team knows best 99% of the time and only when needed force a language that is 'faster' when that is actually required.

  • andy12_ 9 hours ago
    In my case, because ML research is mainly done with Python+Torch, and if you want people to use your code, you must provide them with python. If it wasn't for that, my dream would be to do ML research in a statically compiled language that allowed me to annotate tensor dimensions.
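    Short of such a language, one stopgap in Python is carrying the intended shape in the annotation itself via `typing.Annotated`; nothing checks it at runtime in this sketch, but the intent survives in the signature (libraries such as jaxtyping build actual checking on this idea):

```python
from typing import Annotated, List

# The shape strings are documentation only; no checker enforces them here.
Matrix = Annotated[List[List[float]], "shape: (batch, features)"]
Vector = Annotated[List[float], "shape: (features,)"]

def matvec(m: Matrix, v: Vector) -> Annotated[List[float], "shape: (batch,)"]:
    return [sum(a * b for a, b in zip(row, v)) for row in m]

out = matvec([[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0])  # [3.0, 7.0]
```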
  • tnelsond4 19 hours ago
    Yeah, last year I discovered that AI writes better rust than C, so I switched to rust and it made some quick good code that it couldn't do in C.

    But when I wanted to optimize, edit, and reorganize the code, it was difficult, so I did a rewrite in C and it was lighter, faster, simpler, and less of a headache.

    C for humans, rust for AI.

    • throwaway2037 13 hours ago

          > [L]ast year I discovered that AI writes better rust than C
      
      I am not doubting your anecdata. I am curious about the why. C is so simple compared to Rust. Yes, I understand it is much more dangerous, but I am genuinely surprised by your discovery. Also, the open source training base in C is massive; I assume still much larger than Rust.

          > The best argument for Rust in 2026 is not memory safety or performance. It is that AI writes better Rust than it writes C++. The compiler feedback loop is so tight that models self-correct in real time. Every error message is a free training signal. Rust was accidentally designed for AI-assisted development 10 years before anyone knew that mattered.
      
      This quote bothered me when I read it because it offers no evidence as to why LLMs are better at writing Rust than C++. LLVM can compile Rust (rustc) and C++ (clang) and should offer equally compelling error messages. C++ has notoriously hard-to-read (for humans!) template error messages, but that should not be a big issue for an LLM. When I am stuck on a compiler error, I often turn to an LLM and they can quickly make good suggestions.
      • tnelsond4 10 hours ago
        I was writing emscripten and getting lots of errors and problems and it wouldn't run at all, but rust with bindgen just kinda worked automatically.

        My theory is that LLMs have the brains of hipster coders with their proclivity for rust and node etc.

        Someone really should do some tests.

  • pluc 8 hours ago
    Because you will have to be able to read the code when you inevitably need to get AI out of it.
  • vintermann 9 hours ago
    The challenge is always to get things the way I want. Sometimes I need to explain it in code terms. Sometimes I even have to throw in the towel and write what I mean by hand, and I can't do that unless I'm very comfortable with the code the model writes.
  • tabbott 16 hours ago
    I think the author misunderstands what is good about Python.

    One of the big strengths of Python is legibility: most developers find it easy to read and understand.

    If you are planning to have humans verify the code you're using in production, to confirm it implements your intent, the readability of the code you are producing is important.

    Performance is valuable, but for a lot of code, performance is less important than correctness and ease of verifying it.

    If you are imagining your codebase being one where nobody but Claude reads the code, you might as well do Rust for the better performance. But I don't think a lot of organizations are doing that.

  • RagnarD 9 hours ago
    Really glad to see someone asking this question. After building a fairly significant AI tool with Python tooling, I really wish AI/ML tools would all be rewritten in an actually performant language (say, Rust) without transitive dependency hell across all the package versions.
    • mattstir 8 hours ago
      The vast majority of Python's AI/ML ecosystem is already written in C/C++ and uses interop glue to call it from Python. But agreed on the transitive dependencies, it's a nightmare
  • ExoticPearTree 7 hours ago
    My take is that you use AI to write code in a language you actually understand and are able to troubleshoot.

    What is the point of having AI write code in, say, Rust if you have no clue about Rust and how to debug it?

  • caturopath 13 hours ago
    Others correctly point out that it does matter what language the code is in, since the human does sometimes need to read and understand it.

    But also, I suspect the article is just wrong. "The hard languages got easy first" isn't true in practice and the impressive examples given are not representative or as magical as the poster makes them out to be.

    The takeaway might be right in the end, but the post isn't right in the beginning.

  • headcanon 20 hours ago
    As others have said, the main benefit with Python over Rust is library support especially with ML features. The other gap as I see it with Rust is the lack of native flexible UI support. The nice thing about Rust though is it can serve as a very fast and stable core for an app and offload specifics to TS and Python as their strengths allow, so you get the best of all worlds.

    My current go-to for desktop apps is Tauri, which gives us a Rust backend and a TS frontend (usually React). Local ML features can be easily loaded as a Python sidecar. Production bundling can be a little challenging but it seems to work well so far.

    Sidenote: Golang is also an amazing language for LLM use, I generally do most of my "infra" stuff in Golang over Rust, but either work fine most of the time.

  • blululu 14 hours ago
    I'm sure there are plenty of caveats and breaking points, but if we do adhere to the claim that an LLM coding tool is a nondeterministic sort of compiler, then it really does make sense to pick the most performant language available. Obviously there are caveats around libraries and the native advantages of various languages. I've been doing stuff in C++ for the past month or so and the only slowdown from the language choice is compilation time.
  • pyrale 5 hours ago
    If ai writes my code, the code is the documentation/context.

    Why use any programming language, if we’re going to be maximalists?

    • mtoner23 5 hours ago
      Because programming languages are the clearest way we can write down instructions for computers to execute without ambiguity.
  • ChicagoDave 23 hours ago
    If you're using GenAI, you should go through the process of selecting an optimal tech stack for each solution, but also take into consideration that Claude and other services probably have the most knowledge of Python, JavaScript, and TypeScript, with Go, Rust, Java, and C# following closely behind. Consider what you're building and which elements of the tech stack are optimal for your problem space.

    I don't know rust at all and I've built three applications using it with Claude because it has speed and correctness built-in.

    I use TypeScript for 90% of the things I build. For web development I've used a number of tools, but mostly React, Next.js, or raw HTML/CSS/JS. But if I were building an enterprise application I'd consider my team and whether an opinionated framework (Angular) was optimal over a flexible one (React).

    Each project should consider its own optimal tech stack.

  • prepend 5 hours ago
    Because I want to be able to read and debug it.

    Giving up ever understanding your code with AI is a bad idea.

    It’s like asking why use English.

  • woeirua 19 hours ago
    I had agents code up an app for me in Swift a while back and the entire experience was so much better than your typical Python experience. The agents took full advantage of the compiler and static typing. There were far fewer bugs than expected.
  • p1necone 19 hours ago
    I find if I ask most LLMs to write a self contained script/utility, even in codebases that are 90-100% written in some other language most will default to using python for it, or sometimes bash.

    Usually those kinds of utility scripts are one-shotted without any further input from me, and once they're there and doing what I need I usually don't bother converting them to whatever I would have written them in otherwise (bash would be my usual preference for really small scripts, typescript or rust for bigger utilities, I hate writing python but reading it is fine... kind of).

  • locusofself 16 hours ago
    Most of the article makes sense, but what is this supposed to mean? "Native Rust binaries are hostile to serverless runtimes". I don't think that is true.
    • nallerooth 6 hours ago
      It feels like a really strange thing to say. I've deployed Rust binaries to both Lambda and Fargate in AWS and they've been very performant.
  • markb139 3 hours ago
    Why use any high-level language at all if AI is writing the software? High-level languages seem mostly to be about humans not being able to handle complexity. Not an issue for an automated bot.
  • panny 1 hour ago
    >fast compile

    >rust

    lol, thanks for the humor article of the day.

  • BorisMelnik 4 hours ago
    for me the answer is libraries: most of these other languages don't have the libraries that JavaScript has. It would not be an efficient use of my time, tokens, etc. to rewrite in Rust et al.
  • alok-g 21 hours ago
    I have been wondering on a similar thing; am looking for feedback:

    There are many existing, often mature, third-party software libraries or solutions that a new project could use, but which hide their internals, including how the data is organized behind the scenes*. Vibe-coding the specific project requirements, instead of using the pre-existing third-party libraries, is now becoming a feasible option. The vibe-coded version may be simpler (no features beyond the actual need), more flexible (easier to add new needed features), and the data/model behind it could be more accessible.

    Looking for feedback on pros/cons and experiences along this.

    * I care for the data as it is can be longer-lived than the code itself.

    Thanks.

  • level09 8 hours ago
    I find it funny how "code readability" was the killer feature when humans had to maintain the code, and is suddenly negotiable now that they don't.

    Pre-AI bias is slowly dying.

  • elzbardico 20 hours ago
    So you have a chance of being able to read the absurdly baroque code AI produces.
    • chaidhat 20 hours ago
      you mean baroque-n code?
  • shibaprasadb 10 hours ago
    Python is the 2nd/3rd best language for almost everything. So I guess it helps.
  • ElFitz 16 hours ago
    > You used Python or TypeScript because […]; Rust, Go, C++, and many more would give you 10–100x the performance, but you paid for it: […] a build system that fought you.

    I would argue I spent more time fighting the TypeScript build system than Rust’s.

    But up until recently I only used either just often enough to never remember what magic configuration needed to go in my tsconfig.json and package.json to get TypeScript to work.

  • jackzhuo 19 hours ago
    I still use TypeScript because I know it best. When AI makes a mistake, I can find the bug much faster. For me, the speed of writing code doesn't matter as much as the speed of fixing it.
  • gchamonlive 20 hours ago
    Assuming you are thinking about software architecture and looking under the hood, you are likely to be reading much more code than before. Python is really nice on the eyes and you can easily get a good grasp of what the code is doing. Plus, it's dynamically but strongly typed, so what you see in the code is usually what you get.
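    "Strongly typed" here means the runtime refuses to silently coerce across types, so mistakes surface as errors instead of wrong answers:

```python
# Python raises rather than coercing: "1" + 1 is a TypeError,
# whereas JavaScript would happily produce the string "11".
try:
    result = "1" + 1  # type: ignore[operator]
except TypeError as exc:
    result = f"TypeError: {exc}"

print(result)
```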

    I think the rule of thumb is to use the tool that is right for the job and that you are going to be able to understand the output.

  • dsiegel2275 10 hours ago
    10-100x faster? Maybe for strictly CPU-bound workloads - but if you are building a web app you won't see that performance, as network latency dominates.
  • ngrislain 12 hours ago
    100%, I’ve been writting: Rust, Haskell and Lean 4 with great success with AI. E.g. https://github.com/typednotes/hale
  • kekpek 9 hours ago
    I also try to use Ruby because it's much more readable than anything else. And yes, I still need to review and understand what code the AI generated.
  • Havoc 13 hours ago
    I use a mix of both to try and leverage their advantages.

    Rust in most cases, especially for back end.

    Python when it's low risk (say monitoring dashboard or similar API heavy) or plays to python strengths (e.g. ML/AI - everything ML seems to be python).

  • sirnicolaz 16 hours ago
    Because the SWE benchmarks for LLM coding are done on python code bases, hence you are likely gonna have superior results
    • aesthesia 16 hours ago
      Yeah, this is a big part of it. Labs have been hill climbing on Python for years, plus AI devs are usually most familiar with Python anyways.
  • sega_sai 20 hours ago
    I think it is an interesting question what kind of programming language one needs for an era of agents. It is clear that a programming language designed for humans is not necessarily the best for AI-driven software development. I guess the qualities one would want are formal correctness guarantees and high performance. A question is whether this language is Rust, or whether it is possible to design a better new language.
  • cauliturtle 13 hours ago
    IMO, just use the language you know well. This might be a little off topic, but if you are doing multi-platform development now, such as backend, iOS, and Android, will you go native, or use cross-platform languages? :D
  • sebastianconcpt 2 hours ago
    Precieh*Rust*emsely
  • postflopclarity 5 hours ago
    I've found Julia is quite a nice target for AI coding agents
  • tomashm 12 hours ago
    And with AI writing code, why use libraries, which make us more vulnerable to 0-day attacks?

    Our simulation core components are pure Fortran, no libraries, all written by Claude/Cursor/Codex.

    • lexicality 12 hours ago
      I remember when having as little code to maintain as possible was an engineering goal. My professors were adamant that code reuse was a virtue. I had "less code = less bugs" drilled into me.

      I'm sure the new way is better though, given how much my boss seems to be tracking my token usage these days...

    • puelocesar 12 hours ago
      Honestly I cannot tell if this is satire or not
  • jlnthws 16 hours ago
    Why not use AI to speed up the Python runtime? V8 showed what focused engineering can do for JavaScript, and Astral showed how much room there is to improve Python tooling. The same tricks may not apply directly, but AI could definitely accelerate the work.
  • ryanolsonx 15 hours ago
    Two things to consider:

    - When reading generated code, which programming language would be the most readable to you?

    - Which programming language guides AI to write correct code using language features or guardrails?

    There you will find your answer.

    • voxelghost 15 hours ago
      additionally (but related), which programming language is the easiest and most efficient for you to reason about and feed back to the AI in English (or your native language)?
  • brightball 19 hours ago
    Didn’t Tencent do a study comparing AI performance across about 20 languages showing that Elixir was the top performer?
    • jeremyjh 19 hours ago
      Once you are over a certain threshold it’s more about the average quality of training data than the quantity.
  • yalogin 19 hours ago
    Isn’t the answer usually: because the same AI said Python is the right language for it?

    Honestly I am in the exact same boat, thinking why I don't write in C if Claude is writing it. However, I chickened out, thinking that if support for ML models or LLM-based flows doesn't exist in C, it would be time-consuming to go back to Python.

  • b00ty4breakfast 15 hours ago
    I haven't read the article (because I hate Medium) but I reckon the biggest reason why LLM-assisted projects use Python is because there is a metric buttload of python code on the web to be slurped up and used as training fodder.
    • teo_zero 15 hours ago
      Now I'm curious: is a metric buttload much larger than an imperial one?
  • aryehof 17 hours ago
    This seems to assume that all there is, is systems software, tools and frameworks. Why ignore the elephant in the room - business / enterprise / line-of-business software? The case for Rust, Go, Gleam and Zig vastly changes for these versus Java or C#.
    • tasuki 1 hour ago
      Gleam is not a systems language. It's business / enterprise / parallel - running on Erlang's BEAM VM.
  • kx_x 19 hours ago
    AI/ML stuff: Python

    Personal: Rust/Go based on criticality of being able to glean code quickly, or memory usage, etc

  • seebeen 9 hours ago
    Typescript with strict types and ultra-tight eslint config can give Rust a run for its money.
  • nraynaud 13 hours ago
    Funny, along the same lines I asked an AI to write some wasm text. It was ridiculously bad and I had to intervene heavily to get something working as intended.
  • v3ss0n 13 hours ago
    So you are not going to review the code? Not going to modify it? In how many cases can AI modify code correctly without human input?
  • Decabytes 8 hours ago
    I legit have had this same thought. If we are going to be writing programs with AI, we should be programming in a more performant and explicit way, with statically typed languages that can encode a program's invariants, even if it requires programming in a way that would be tedious for humans.
  • infinite_spin 1 day ago
    For me, whether it's AI or my own handcrafted artisanal code, the choice of language comes down to what has the least friction. This means I turn to vite/react for a lot of frontend requirements, and that the backend will be in nodejs or python, because those are easier for me to debug than writing an equivalent application in C++ or Rust.
  • trelliumD 6 hours ago
    I would say that Object Pascal is the clear winner in terms of readability, performance, and ease of review because of the static types.
  • QuadrupleA 19 hours ago
    Because AI creates unmaintainable messes in any language, and ergonomic ones help humans clean up.
    • janfoeh 19 hours ago
      Never mind cleaning up, you also have to understand the language just to judge and review the LLM's output. How else are you to separate a good design and implementation from a bad one?
  • brainless 15 hours ago
    I build all my projects with Rust and TypeScript (https://github.com/brainless). I had started learning Rust around 2023 but was progressing very slowly. Since I stopped writing (or even reading) code line by line about a year ago, I build exclusively with Rust and TypeScript. API types are generated from Rust: all my projects have a shared-types folder with a utility to generate TypeScript types. I have a template that I use for each of my projects: https://github.com/brainless/rustysolid.

    I am from a Python background (11 years or so), PHP before that and C/C++ in college days. Rust works very well with coding agents. The amount of code in training data may be less but I would rather have the agent fight the compiler. Given that OpenAI and Anthropic seem interested in Rust, chances are that there is a ton of synthetic code generated with Rust.

  • bad_username 23 hours ago
    The article applies to a narrow case of a totally green field application that's going to be completely vibecoded. This is the only case where you reasonably can be indifferent to what the language is, and so you can abandon familiar Python and go with unfamiliar Rust. (If you _are_ familiar with Rust, the point of the article is moot.)

    This "fair weather development" approach feels very risky if that application is going to be exposed to any serious usage. There WILL be a situation when things break and the AI will be powerless to fix it (quickly) without breaking something else in a vicious loop. There WILL be a situation where things work fine and tests pass with 3 concurrent users but grind to a complete halt with 1000 because there is something O(N^2) deep in the code. And you NEED a human to save your day (which requires also proper architecture for that to be possible in the first place). If you don't plan for this, and just hope for the best, then you are building nothing more than a toy. And if you plan for this, then it matters again what the language is, and whether your team is proficient in it.

    Or maybe I'm too old-fashioned or too far behind the state of the AI art...

    • woeirua 19 hours ago
      You’re behind the state of the art. I’m not exaggerating when I say AI can diagnose and solve those issues for you too.
  • trelliumD 5 hours ago
    Object Pascal is by far superior in combining readability and performance. The static type system is also a huge bonus.
  • isaisabella 19 hours ago
    Really agree. Python is popular because it's easy for humans to write. But if the coder becomes an AI, then Rust would be preferable for the agent, just as Python is for humans. In addition, it brings better performance.
  • xnx 1 day ago
    For the utilities I write it is faster to iterate without having to compile. When I get to the point where I'm done adding changing features, and performance is an annoyance I can always ask the AI to "rewrite this in Go". (I've never gotten to that point.)
    • throwaway2037 12 hours ago

          > it is faster to iterate without having to compile
      
      I hear this sentiment from time to time. With a modern PC, IDE, and a Java or C# development toolkit, incremental compile times are insanely fast, even on very large projects. I can say from first-hand experience: you can iterate as fast as in Python. I don't know enough about Golang to say the same.
  • semiquaver 20 hours ago
    Great question. And I don’t think that Python, Ruby and PHP have a good answer. Scripting languages cater to human weaknesses. The 10-100x perf cost was never really worth it but now it’s impossible to justify.
    • throwaway2037 13 hours ago
      One question I have asked myself many times: What if Python had a strictly-typed mode? (It would require strict type hints/annotations.) Or there was a well-maintained branch that enforced strict types. I also thought that Python is a beautiful language, but the weak(er) typing is such a no-go (for me, personally) and causes ridiculous slowdowns at runtime due to this type flexibility. Finally, I know the answer why it has not been done: The ecosystem of 3rd party libraries is far too large to impose such a requirement. It would be the Python 3.0 upgrade all over again that took more than ten years to complete.
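    The closest thing to that strict mode today lives at the tooling level rather than in the language: full annotations checked by an external tool such as `mypy --strict`, which rejects untyped calls before runtime but, unlike a true strict language mode, does nothing for runtime speed. A minimal sketch of the workflow:

    ```python
    # Fully annotated code as `mypy --strict` would require; the checker
    # (an external tool, not part of CPython) rejects untyped or
    # wrongly-typed calls before the program ever runs.

    def mean(values: list[float]) -> float:
        """Reject empty input explicitly instead of failing mid-arithmetic."""
        if not values:
            raise ValueError("mean() of empty list")
        return sum(values) / len(values)

    # `mypy --strict` would flag a call like mean(["a", "b"]) at check time;
    # plain CPython only discovers the problem when sum() hits a str.
    print(mean([1.0, 2.0, 3.0]))
    ```

    Running `mypy --strict` over a module like this gives the enforcement the commenter describes, without any change to the interpreter, which is also why it sidesteps the ecosystem problem: untyped third-party libraries stay usable.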
  • fulafel 15 hours ago
    What are some concise languages that are well received by humans (on par with Python)? Token efficiency might be a marked advantage.

    Clojure comes to mind at least.

  • john_builds 7 hours ago
    Honestly, use a language that you're familiar with. Being able to understand and debug is critical even with AI, as it can fall into weird loops.
  • wraptile 14 hours ago
    Python is incredibly readable too. I can scan through LLM-generated Python changes in minutes, versus hours for other languages.
  • LarsDu88 15 hours ago
    Thats exactly what i did with https://panel-panic.com
  • brontosaurusrex 13 hours ago
    Is there a blocker that would prevent a future AI from writing perfect assembly (for n architectures) on the first pass?
  • frollogaston 14 hours ago
    "The Python ecosystem is increasingly a Rust ecosystem wearing a Python hat"

    If anything this is a reason to keep using Python.

  • cultofmetatron 12 hours ago
    I can't imagine a better output language for LLMs than Python. Not because it's particularly good; far from it, it's got dynamic typing and more or less sets you up for runtime failure. However, it probably has the largest corpus of training data aside from JavaScript.
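    A minimal illustration of that runtime-failure point: the annotation below is not enforced, so a mistyped call is only caught when the offending line actually executes.

    ```python
    # Dynamic typing in action: nothing flags the bad call at definition
    # time or call time -- the failure surfaces only when sum() runs.

    def average(xs: list[float]) -> float:
        return sum(xs) / len(xs)

    assert average([1.0, 3.0]) == 2.0

    try:
        average(["1.0", "3.0"])  # passes silently until sum() raises
    except TypeError:
        print("failed at runtime, not at definition or call time")
    ```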

    Part of me worries that all this push to LLMs will marginalize niche programming languages in startups, since the lack of training data means falling back to hand-coding, a skill that I suspect will get increasingly niche over time. I feel capitalism will basically render programming languages into build artifacts over time.

  • odiroot 12 hours ago
    Because I don't only write the code. I will also read it, many more times.
  • 999900000999 23 hours ago
    So I can fix it when it breaks. I don’t understand anyone shipping real code without human review.

    Give it 2 years, the ‘Blame the AI ‘ incidents will increase. Like an unfaithful partner you’ll always return to it

  • Myzel394 14 hours ago
    Bullshit article. AI is not meant to be a black box that you just spit a prompt at and it generates a whole app where you don't understand a single line. That WILL eventually fail. There was an article here some time ago where someone described it pretty well: "use AI as autocomplete on steroids". Therefore, use a language you know well and can actually debug, and use AI as a tool, not as your replacement. And don't use it to port your Electron app to Rust if you don't know Rust, Jesus.
    • axegon_ 13 hours ago
      > you just spit at it and it'll generate you a whole app and you don't even understand a single line

      So we are going to pretend this isn't happening everywhere now? And that it isn't failing on a daily basis? I'm sorry, but I've been saying this for years now and it is my main argument for not using slop machines: no one writes the code and no one reads the code. I can name dozens of Fortune 500 companies where "tokens used" is a performance metric for developers (as in, more tokens = better performance), all code is written by slop machines, all reviews are made by slop machines, and developers simply add "this is intended" in code reviews.

  • pvelagal 19 hours ago
    Nice perspective on languages in the AI era. I think AI should be used to build best performing and highly scalable software systems.
  • grigio 12 hours ago
    Because the training set is very good. Then ask to rewrite in rust
  • avereveard 1 day ago
    https://arxiv.org/pdf/2508.09101

    tl;dr: about 2 percentage points lost on average for Rust compared to Python; the gap varies by model. Go has a better upper bound, but Opus had it 3% below Python.

    The benchmark is a bit old, but the research on why is there; the article is just vibes.

  • sakesun 20 hours ago
    Python is more a UI for human comprehension of logic. A mathematical notation for logic, not code to drive a computer.

    And prompt does not replace that.

  • serf 23 hours ago
    1) python is one of the foremost trained upon languages

    2) it's practically verbose, not technically

    3) it resembles pseudocode

    4) batteries included shortcuts a lot of work

    all of these reasons are a boon for LLM work.

  • devin 20 hours ago
    Clojure is better. REPL + immutable defaults.
    • vegnus 2 hours ago
      Clojure or a Clojure-like will become the default for LLMs, or should be rather. It seems too good to ignore
  • dandanua 15 hours ago
    You can also use Julia. It is both easy for humans to write and read and for AI to generate because of the sane and powerful type system.

    However, I expect that in the future some new language will take this role of dual use.

  • fxj 23 hours ago
    One thing to consider:

    The (well-known) Sapir–Whorf hypothesis (if you don't know it, look it up) is often invoked for natural languages, but there's a pretty direct analogue for programming languages: the language you "think in" while solving a problem biases which abstractions and idioms you reach for first.

    If you force an LLM to first solve a problem in a highly abstract language (Lisp, APL, Prolog) and only then translate that solution to C++ or Rust, you're effectively changing the intermediate representation the model works in. That IR has very different "affordances", e.g.

    - Lisp pushes you toward recursive tree/list processing, higher‑order functions and macro‑like decomposition. (some nice web frameworks were initially written in LISP, scheme, etc...)

    - APL pushes you toward whole‑array transforms, point‑free pipelines and exploiting data parallelism. (banks are still using it because of performance)

    - Prolog pushes you toward facts/rules, constraint satisfaction, and backtracking search. (it is a very high abstraction but might suit LLMs very well)

    OK, and when you then translate that program into C++/Rust/python, a lot of this bias leaks through. You often end up with:

    Rule engines, constraint solvers, or table‑driven dispatch code when the starting point was Prolog.

    Iterator/functor pipelines and EDSL‑like combinators when the starting point was Lisp.

    Data‑parallel kernels and "vectorized" loops when the starting point was APL.
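    As an editorial toy sketch of the Prolog-flavored leakage described above: translated into Python, the solution tends to arrive as facts plus rules plus a generic query, rather than bespoke branching. (The `grandparent` example is the classic Prolog textbook case, used here purely for illustration.)

    ```python
    # Prolog idiom leaking into Python: a fact table and a rule expressed
    # as a query over it, instead of hand-written if/else logic.

    FACTS = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

    def grandparent(gp: str, gc: str) -> bool:
        # rule: grandparent(GP, GC) :- parent(GP, X), parent(X, GC).
        return any(
            ("parent", gp, x) in FACTS and ("parent", x, gc) in FACTS
            for (_, _, x) in FACTS
        )

    assert grandparent("tom", "ann")       # tom -> bob -> ann
    assert not grandparent("bob", "tom")
    ```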

    In principle, an LLM could generate those idioms directly in C++/Rust. In practice, however, models are heavily shaped by their training distribution and default prompts. If you just say "write in Rust", they tend to regress towards the most common patterns in the corpus (framework‑heavy, imperative, not very aggressively functional or data‑parallel), even when the language would support richer abstractions.

    By inserting a "thinking" step in a different paradigm, you bias the search over solution space before you ever get to Rust/C++. That doesn’t magically make the code better, but it does change which regions of the design space the model explores.

    Same would also be true for python which is already a multi-idiomatic language. So it might be a good idea to learn a portfolio of different languages and then try to tackle a problem with a specific language instead of automatically using python/go/rust because of performance.

    Something to consider...

    p.s. how would a problem be solved if the LLM had to write it first in Erlang? Would it then be automatically distributed?

    p.p.s. the "design pattern" of the GoF comes automatically to my mind, which might be a good hint to the LLM to use.

  • saltyoldman 3 hours ago
    I migrated to Golang. I think it's a much better language to write TUIs, REST and interact with LLMs.
  • jpgvm 14 hours ago
    This is a fairly crap post and the reasoning isn't sound but somehow the conclusion is still somewhat correct.

    You do want to use Rust with LLMs.

    The reason you want it is simple, it's more constrained.

    LLMs thrive on constraint and drown in freedom.

    The further you can constraint the solution space the more likely you are to end up with a solution you like/is actually good.

    Rust has several properties that make it really good for LLMs:

    * Really robust type system that is also very expressive, if guided LLMs can implement most of the invariants in types which substantially increases the chances of success.

    * Great compile-time errors; the specificity and brevity (vs., say, C++ template expansion) mean token-efficient correction of syntax and/or borrow mistakes etc.

    * Protection against subtle errors at compile time, namely data races and memory safety issues.

    * Great corpus of well-designed code and patterns, higher quality on average than some other ecosystems more favored by beginners/mass-market programming.

    * Stdlib is strong, small-ish number of blessed crates.

    * Context friendly, type signatures, errors, etc are all dense information.

    * Also bias towards compile time checks means less runtime tests which means less toolcall time (and less tests needed overall) which in turn makes the process a ton faster.
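    For contrast with Python, here is roughly what "invariants in types" buys, sketched as a Python analogue of Rust's newtype pattern (the `Username` type is hypothetical). The key difference: Python can only enforce the invariant at construction time, while Rust's compiler verifies every use of the wrapper at compile time.

    ```python
    # Parse-don't-validate, Python edition: the constructor is the only
    # way to obtain a Username, so the invariant (non-empty, stripped)
    # holds everywhere downstream without re-checking.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Username:
        value: str

        def __post_init__(self) -> None:
            if not self.value or self.value != self.value.strip():
                raise ValueError(f"invalid username: {self.value!r}")

    def greet(user: Username) -> str:
        # No re-validation needed: holding a Username proves the invariant.
        return f"hello, {user.value}"

    assert greet(Username("ada")) == "hello, ada"
    try:
        Username("  ")
    except ValueError:
        pass  # rejected at the boundary, as intended
    ```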

    I have been continually using Rust, Python and Kotlin since ~Jan this year and keeping track of my thoughts and I increasingly bias towards Rust now where I would have previously chosen Python or Kotlin instead just because I am lazy and I prefer the tool that the computer writes better so I have to write less lol.

  • jkausti 14 hours ago
    Python has in recent years become unnecessarily complex, and the type-hint system in particular already has a lot of legacy syntax that confuses AI agents.
  • harrouet 8 hours ago
    I am surprised that Python is being "threatened by AI writing code", as per the article, yet the article never asks whether the AI is more efficient writing Python or something else.

    I mean, the Python ecosystem is high-quality and generally well-documented. What if the AI spent 30% fewer tokens generating code in Python than, e.g., in Rust?

    Or is there some kind of information theory where, given the same goals / tests, the AI will spend roughly the same in any language?

  • stuaxo 20 hours ago
    Devs still have to maintain this code. The Python devs can definitely get the LLM to write (some kind of) Rust, but when it goes wrong and you hit the wall with the LLM, they will have to learn Rust, which might take a while. This sounds like a bit of a project risk.
  • sixdimensional 19 hours ago
    First one to vibe code a language for LLMs, by LLMs, wins a cookie?
  • caspper69 10 hours ago
    I share the sentiment unless you're working in an area where Python's library ecosystem is simply the better choice.

    When I vibe, it's C# all the way. Not a popular opinion on HN, but the LLMs are trained heavily on the language and are very, very good at it, plus with the 1-file-per-class organization, it can stay pretty clean. I mean, v10 LTS was just released, with all kinds of new language features, EFCore is still the best ORM I've ever used, with full support for SQLite, Postgres, MySql, etc. It just makes writing and reviewing code a pleasure. And the LLMs don't f*ck it up.

  • sgt 13 hours ago
    How about modern Java? Any experiences?
    • munksbeer 13 hours ago
      Disclaimer: I love writing production systems in Java. I was a C++ programmer for 10 years before moving to Java about 15 years ago. Java offers a virtually all in one package when writing large systems. You have a single language where you can write code that doesn't care to be the fastest possible, and you just rely on ZGC to do its thing, and it works. Or you can write GC free code with a mostly quite performant SoA type approach. You can do this in the same codebase, and developers don't need to know different languages to write either style of code. You then have one build system, one deployment system, an incredible set of observability tooling, etc, etc.

      So I might be biased, but with the correct curation of AGENTS.md files and skills, we're getting extremely good results using Claude Code writing Java.

      Another disclaimer: I haven't tried with another language, but we're happy with the results.

      • sgt 12 hours ago
        Would be interesting to find out what kind of production systems you write in Java and how you deploy / scale them. What DB backends you use, caching, etc. And whether you're also on Spring.
        • munksbeer 11 hours ago
          Always finance, trading systems. In the last 15 years mostly what they call "front office".

          At the moment, for the place I work, we deploy on AWS mostly (because that is where our target trading venues often are). DB backends are largely not something we think about too much, because all of that is done out of band of course as a final state. Our main persistence is through our "bus" using aeron, and everything starts and recovers from there. This is not your typical enterprise java. No Spring.

          • sgt 10 hours ago
            Ok that's quite interesting. Am I correct to presume this is crypto trading? I was under the impression most regular HFT is near the exchanges, or physically at the exchange in a DC. Unless it's an AWS Outpost or something.
    • DonHopkins 13 hours ago
      The reason why Java is such a terrible choice now is not technical, it's the Lawnmower Nazi argument:

      https://news.ycombinator.com/item?id=15886728

      masklinn on Dec 9, 2017 | parent | context | favorite | on: Larry Ellison allegedly tried to have a professor ...

      And remember,

      > Do not fall into the trap of anthropomorphising Larry Ellison. You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle. — Brian Cantrill (https://youtu.be/-zRN7XLCRhc?t=33m1s)

      And

      > I actually think that it does a dis-service to not go to Nazi allegory because if I don't use Nazi allegory when referring to Oracle there's some critical understanding that I have left on the table […] in fact as I have said before I emphatically believe that if you have to explain the Nazis to someone who had never heard of World War 2 but was an Oracle customer there's a very good chance that you would explain the Nazis in Oracle allegory. — also Brian Cantrill (https://www.youtube.com/watch?v=79fvDDPaIoY&t=24m)

      • sgt 13 hours ago
        Sure, I don't like Oracle either but Java has moved beyond that years ago. There's a thriving community that does not rely on Oracle.
  • wwarner 19 hours ago
    Yes, and wondering why all the AI tooling is written in node.
  • pphysch 5 hours ago
    The article gives zero examples of someone even attempting to transpile something from Python.

    Numpy is two decades old. The lesson of "don't write everything in Python" is old news and LLMs just add a little momentum to that.

    Glue languages will always exist and Python is the best at it.

  • CivBase 23 hours ago
    This point only makes sense if you ship AI code without reviewing it. And if you're shipping AI code without reviewing it, you're going to run into much bigger problems than Python performance limitations.
  • FartyMcFarter 11 hours ago
    > The old open-source bargain had a positive feedback loop. You pick Python because it’s easy. You find a bug in a dependency. You fix it.

    > Agents broke that loop in a specific way: the unit of contribution shifted from the patch to the port.

    What does this even mean? Every time there's a bug we port the whole code to a different language instead of patching it? This sounds like absolute nonsense, and makes me wonder whether a human actually wrote this.

  • lenerdenator 1 day ago
    1) I still have to comprehend it.

    2) The corpus for the sort of applications I build is likely larger for Python than it is for C++ and Rust. Bigger corpus == more training data == better generated code.

    3) The bottleneck in the applications I run aren't in the execution of the code; they're in the database/network latency.

    4) I don't get anything extra for pushing Rust or C++ over Python.

    • pacificpendant 1 day ago
      If all the libraries are Rust, as the article claims, having the top layer in Python probably makes even less difference.

      I tend to agree with the article’s statement about the value of the test code though, may even have been true before LLM code took over.

  • ospider 15 hours ago
    Because I have to maintain it.
  • docmars 6 hours ago
    Simple answer: it's easily reviewable by a human, which will always be an important step in the process of building software, no matter how many hype conferences tell you to stop checking AI output irresponsibly.
  • mikeweiss 18 hours ago
    So we can read and debug it if we'd like?
  • yangm97 18 hours ago
    If AI writes your code, why use frameworks?
  • PeterStuer 12 hours ago
    In my case: AI might write the code, but I have to architect the system, read the code, iterate and learn from it. Validate whether an approach makes sense, whether the chosen dependencies make sense, whether the testing is adequate and covers known failure paths ... good luck if this is a language and ecosystem you are not proficient in.
  • coolThingsFirst 7 hours ago
    This is the second time this week im reading that golang is that powerful.

    I thought it’s a poorly designed language with GC pauses so it surprised me that the ts compiler was written in it.

  • jaredcwhite 18 hours ago
    Code exists for humans to read and write. The fact it happens to compile and get executed by a computer system is a side effect.
  • jillesvangurp 15 hours ago
    The article is likely to offend some people, but it's not entirely without merit. I've been shifting my attention to using languages that I'm not great at. Which language is the right one is a choice that is no longer dominated by what you know well. You can still factor that into your choice, but there are other considerations now. One of which is that you will soon be generating orders of magnitude more code than you can physically review by hand. You need to compensate for your own inability to review all the code with proper guard rails and automated verification.

    If you've managed software teams before, this won't be new. You just need to make sure the team does the right things. But you don't want to inject yourself on the critical path of everything. That's micro managing. People hate it and it's counter productive. You need to instead delegate responsibility and check that there is a good process with checks and balances that ensures things are done right.

    If you are vibe coding, one shotting, etc. you are essentially operating without guards rails. You won't catch mistakes that are being made. You aren't doing the due diligence of verifying that what was delivered is the same as what was being asked for.

    But if you do use guard rails, most of the engineering effort (i.e. your time) goes into building mechanisms to prove that what is being delivered is fit for purpose. And that needs to lean heavily on tools that verify things. Compilers, linters, test suites, headless browser based scenario tests, elaborate benchmarks, etc. Anything you can throw at this. The more the better. Even code quality issues are something you can catch and fix with tools. Code duplication issues are detectable. Poor cohesiveness and high coupling are simple metrics that you can optimize for.

    With AI in the mix, all of that gets run automatically and you create a feedback loop where any introduced problem is more likely to be caught early. If you are a good senior engineer, you would have been doing all of this anyway. Because it compensates for your own inability to not make mistakes. With AI, you just need to do more of it.

    I've dabbled with a few generated code bases in Go in the last few months. I have about 3 decades of experience with other languages. But not a lot of experience with Go. So, why did I pick it? It's not because I particularly like the language. It all looks a bit verbose and tedious to me and I've always preferred other languages. But since I'm not writing any code, I can step over that and make use of the fact that the compiler and build tools are really good and catch a lot of issues. By using Go, I'm leveraging the tool ecosystem around it. Which is really solid.

    Because I don't read/write Go code, I'm forced to treat the system as a black box. Which means I just test the hell out of it in any way I can think of. When I don't know how, I ask the AI to suggest me ways. And it does, and I make it add those as well. My little system has performance benchmarks, end to end tests for everything, scenario tests testing complex scenarios, static code analysis, race detection, etc. And lots of unit tests. If I find any issue, I get paranoid about what else might be broken.

    All I do is getting systematic about making it falsify the theory that it could all be broken by failing to produce a broken test scenario. I'm equally paranoid about code quality and technical debt. So, I make sure to check for that as well. Not manually of course. I simply ask the AI tool to do targeted reviews of code looking for duplication, adherence solid principles, etc. Any issues found are prioritized and addressed. With most quality issues, simply asking an LLM to look for such issues is surprisingly effective. Having guardrails just automates these checks and balances and makes them routine.

    My inability to review at the line level no longer matters that much. Worse, me reviewing tens/hundreds of thousands of lines of code is probably counter productive. Even in languages I know well, it would take ages. I'd be the slowest part of the whole engineering process.

  • super_user 15 hours ago
    Why not code in assembly?
    • kittikitti 14 hours ago
      I believe in the forecast that AI will converge to assembly (or machine code) in the next 5 to 10 years. However, there isn't a consensus on the ability for AI to program in interpreted languages like Python. In other words, AI needs to solve interpreted programming languages before directly generating low-level compiled languages.

      The friction is that most developers aren't trained to comprehend assembly. The vast majority of CS programs don't teach it seriously. Many don't really know the difference either, and even I would need a refresher before trying to debug assembly.

      I also think token cost restricts directly writing in assembly language. I've experimented with assembly output, as I'm sure many of us have, and can confirm small assembly programs produce more tokens as a result of the lack of a standard library. However, because tokens are currently priced per million, I don't think it's a significant constraint.

      The hops right now are Python -> C -> Assembly . The trend is now Rust/Go/C -> Assembly. Perhaps in the future, there will be nothing in the middle.

  • bandrami 16 hours ago
    Because once you leave Python or JS the quality of LLM-produced code degrades catastrophically.
  • tontinton 1 day ago
    Also easier to ship a binary like a cli
  • yieldcrv 9 hours ago
    > why use Python

    when I said “the ecosystem” I didn’t mean of libraries and other developers, I meant of recruiters and hiring managers

    and whose humiliation ritual I could pass

  • grougnax 13 hours ago
    Rust is the way!
  • greenail 4 hours ago
    Maybe now isn't the time, but at some point it needs to be better understood which models are best for which types of programming, styles, or languages. These models are not all the same for every language. The harness is also a factor. Python seems to be somewhat of an exception today, but that may not last. Another question might be: is there a pattern where you prototype in language X and implement in language Y? The models seem to be very good at porting code. I've used this pattern with Python -> C++ SDL to squeeze out performance after I had a working GUI. Has anyone measured this in terms of speed (wall clock) and in terms of token efficiency?
  • phplovesong 10 hours ago
    This hits hard, especially for PHP. Previously we had devs "who only knew" PHP, and once they started vibe coding, most have moved to Go.

    As a benefit, I find that static types help AI make more correct/better decisions than you see in PHP (where types are mostly only class types, nominal or primitive [lol no generics]).

    But it's pretty much true; I foresee a decline in dynamic languages, as the use case is pretty much void and null.

  • jackmott42 10 hours ago
    I recently started a game project in Rust aided by Claude Code because I asked myself that same question. I like Rust, but it is definitely harder than C# for me. But with the AI aid, doesn't seem to matter which language I use. So I take the performance and safety wins.
  • aussieguy1234 10 hours ago
    Writing is half of the equation. Once written, you have to maintain it. That usually requires understanding the language.
  • jollyllama 11 hours ago
    Simplicity of deployment. No need to compile. People bitch about virtualenvs but they pretty much just work.

    Also, totally FOSS. Unparalleled library ecosystem (no, I don't buy into the hype about re-rolling all your own dependencies).

    Beyond that, Go is kind of nice, but the lack of inheritance is stifling. Python has everything that's needed and very little that's not.

    Edit: Getting downvoted, probably because of the comment about virtualenvs. What's your alternative? .NET DLL's? The joke that is NPM? Go probably does this better, admittedly, but Python is practically one of the best out there.

  • aaroninsf 23 hours ago
    As always, "it depends."

    I'm using coding tools to build a complex media-intensive application. The approach I'm taking is to build a _reference implementation_ in Python, which is in its design specifics, constrained to use patterns which transliterate into the actual deployment targets (iPadOS/MacOS/Web).

    Why start with Python?

    Because I can read it, reason about it, and run it, trivially, which are Good Things for the reference. I intend to have multiple targets; I'd rather relate them to a source of ground truth I am fluent in.

    For what I'm doing, there is also a very rich set of prior art and existing libraries for doing various esoteric things—my spidey sense is that I'm benefiting from that. More examples, more discourse.

    I'm out of the prediction business and won't say this is either a good model for every new project, or, one I will need in another N months/years.

    But for the moment it sure feels like a sweet spot.

    Ask me again though, after the reference goes gold and I actually take up the transliteration though... :)

    • t43562 2 hours ago
      One can use a language as a sort of prototyping tool. I've once or twice done an implementation of some algorithm or idea in python and worked through all my conceptual errors and then done it again in C.

      I think it was a hell of a lot easier than working through all that change in C first.

  • BiraIgnacio 21 hours ago
    I dislike Go but I have to admit, it's a great language for AI generated code. Simple enough, it compiles quickly and it performs meh-well enough for most applications.

    One of the reasons I dislike Go is because it's easy for most engineers to write really low grade code with it. But AI agents would probably not write the best code in any language anyway, so not much is lost.

  • Terr_ 23 hours ago
    A somewhat contrarian/pessimistic view: The hardest thing in any future of LLM generated code is going to be the verification step, and especially types of verification that require humans which are going to be the most expensive.

    Therefore the "best" language is going to be whatever makes it easiest for humans to detect bugs, bad design, or that the "wrong thing" has been developed.

  • ElenaDaibunny 15 hours ago
    Honestly the bigger question is why we still write glue code at all. Let the agent orchestrate.
  • cryptica 12 hours ago
    Agreed. People should just use JavaScript since it's the one with the largest training set.
  • ReptileMan 13 hours ago
    >Smaller languages like Zig, Haskell and Gleam don’t have the same quality when AI-generated (for now).

    GPT 5.5 writes good haskell.

  • virtualritz 14 hours ago
    "Rust, [...], a build system that fought you"

    I started using Rust in 2018 and I've never used a build system that fought me less, ever, before or after.

    I stopped reading after that sentence.

    • IshKebab 14 hours ago
      I presume they were talking about C++? But yeah weird to bring that up in a comparison with Python of all things. A language with the build system equivalent of "stop hitting yourself". (At least until uv saved us from that bullshit.)
  • j45 14 hours ago
    The most common languages in the training corpus will output the most reliably.
  • GardenLetter27 23 hours ago
    The LLMs just churns out non-idiomatic slop in any language.

    It doesn't matter if the 800-line if statement is able to use pattern matching.

    There's been a lot of progress on making coding agents able to solve problems when they can easily evaluate in a closed loop, we desperately need something similar for controlling complexity and using relevant abstractions.

  • notepad0x90 15 hours ago
    if AI is generating text for you, why type?
  • mawadev 15 hours ago
    I stopped reading as soon as the Claude C compiler was mentioned and it was claimed it can compile big projects. We all seem to exist on a different plane of reality.
  • MantisShrimp90 16 hours ago
    Cute interesting take but I feel like it misses the point. Specifically, this makes sense where performance is necessary. Many projects have been written in suboptimal languages because the writers didn't want to learn lower level languages.

    Still, not ALL projects benefit from such an approach and there are times when yes python is the right tool. Not just due to readability of humans but the other qualities that make it really good for small, iterative apps.

    My take has never changed. Knowledge is cheaper than ever, but wisdom is as rare as ever. This is a great example of misunderstanding the former for the latter

  • lqstuart 19 hours ago
    Because LLMs fuck it up near-constantly and I need to review it
  • alfiedotwtf 19 hours ago
    … because model tool calls is non-standard, so Python as the only tool call available works wonders

    (Joke but also not a joke)

  • hirvi74 19 hours ago
    Interesting question.

    AI doesn't really write code for me, but I do use them to brainstorm/ask questions. Though, I do not use Python. I have never been a fan of the language. I still think Python is a perfectly serviceable language, but it would solve no (important) problems I have ever had better than any other language.

    I can see why Python is appealing to many people, and I applaud Guido for all the work and oversight over the years, but Python lacks a lot of the things I like in a language.

  • zb3 19 hours ago
    Because I can understand and edit that code by hand if I need to.
  • Computer0 20 hours ago
    I stay for the libraries
  • nsonha 14 hours ago
    People don't write Python because of the language. Some do, but that's not the main reason. They do it to use tools that only exist in the ecosystem. AI changes nothing.
  • globalnode 20 hours ago
    You still need to look at the code one day, so I'd say C++ would still be a preferred target language, even for AI. I know I hear a lot about Rust, but I still get the sense it's a niche language overall. I know people love it and point out its advantages, but sometimes good enough is good enough (i.e. C++).
  • BrenBarn 17 hours ago
    If you can use Python, why have AI write your code? :-)
  • ActorNightly 23 hours ago
    a) Python (and Node) comprise the largest training set for all the models, so you are likely to get way better accuracy, especially with local models

    b) Python code is easier to introspect, and set up test harnesses around. And also extend in agentic frameworks

    c) LLMs are really good at translation. I can give it python code and it can translate it into C.
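    A quick sketch of point (b): Python's standard library can introspect a generated function and wrap a test harness around it with no extra tooling. The `slugify` function here is an invented stand-in for LLM output, not anything from the thread.

```python
import inspect

# Hypothetical LLM-generated function we want to sanity-check.
def slugify(title: str, sep: str = "-") -> str:
    """Lowercase a title and join its words with a separator."""
    return sep.join(title.lower().split())

# Introspection: verify the generated code matches the spec we asked for.
sig = inspect.signature(slugify)
assert list(sig.parameters) == ["title", "sep"]
assert sig.parameters["sep"].default == "-"

# A minimal test harness wrapped around the generated function.
cases = {
    "Hello World": "hello-world",
    "  Spaces   everywhere ": "spaces-everywhere",
}
for text, expected in cases.items():
    assert slugify(text) == expected, (text, slugify(text))
print("all checks passed")
```

    The same pattern scales up: an agentic loop can run `inspect.signature` against its own output and retry before a human ever looks at it.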

  • lynx97 12 hours ago
    I had gpt-5.5 translate microgpt.py into a C++ version recently. I had to steer/convince it to use data oriented design to avoid excessive pointer chasing, but the end result was as expected: Now 500 LOC instead of 199, but speedup was 100x. That speedup is definitely worth doubling the line count. And frankly, modern C++ can read very nicely, even compared to Python.
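    The data-oriented change described above can be sketched even in Python: instead of a list of per-item objects scattered across the heap, keep each field in one contiguous buffer (a struct-of-arrays layout). The names below are illustrative, not taken from microgpt.py.

```python
from array import array

# Array-of-structs: each neuron is a separate heap object (pointer chasing).
class Neuron:
    def __init__(self, w, b):
        self.w = w
        self.b = b

neurons_aos = [Neuron(0.5, 0.1), Neuron(-0.25, 0.0), Neuron(1.0, -0.3)]

# Struct-of-arrays: each field lives in one contiguous buffer, which is
# what the C++ rewrite exploits for cache-friendly sequential access.
weights = array("d", [0.5, -0.25, 1.0])
biases = array("d", [0.1, 0.0, -0.3])

def forward_soa(x):
    # One linear sweep over contiguous memory per field.
    return [w * x + b for w, b in zip(weights, biases)]

def forward_aos(x):
    # Each iteration dereferences a separate object on the heap.
    return [n.w * x + n.b for n in neurons_aos]

assert forward_soa(2.0) == forward_aos(2.0)
```

    In CPython the speedup from this is modest, but it is exactly the layout that makes the 100x possible once the same structure is expressed in C++.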
  • HacklesRaised 12 hours ago
    Slop begets slop?
  • triyambakam 14 hours ago
    > Go and ... strong type systems

    Lol good meme

  • shevy-java 16 hours ago
    > The strongest argument for Python and JavaScript was never the languages themselves. It was the ecosystems

    That's already a glaring mistake. People could say perl's CPAN is great. Well, it did not save perl from declining in the last 20 years.

    > The Python ecosystem is increasingly a Rust ecosystem wearing a Python hat.

    Without statistics to prove this, this claim is useless.

    Also, depending on Rust isn't that strange for a language that is itself based on C. The only way I would disagree with such an argument would be if Python were written in Python. But since it is syntactic sugar over C - just like Ruby or Perl - the argument for using Rust here is simply no different from using C. Perhaps Rust is better than C, but it is not fundamentally different. Whether Python were written in Rust or C makes no functional difference here.

    As for AI becoming our new Overlord: I honestly do not want to depend on US mega-corporations. I am not disputing the fact that AI has objective use cases. I am objecting this herd mentality of everyone putting an AI chip into their brain now.

    Damn AI slop zombies everywhere - it's like in the old B movie "They Live". But with less entertainment value than that. If they chew bubblegum then it is to slop up everything, not to kick ass.

  • xyst 20 hours ago
    Why use any general programming language at all? Just write it in assembly or binary. Skip the middleman bro
  • plexescor 15 hours ago
    It's unbelievable that there are people with this type of thought process: if AI can talk, why speak?
  • suis_siva 22 hours ago
    Let's go through some of the arguments, in no particular order:

    > Klabnik vibe-coded a new language in Rust, therefore Claude + Rust = Good.

    I argue the inverse -- Rust, being an ML-family language, is well suited for parsing and language design (I know! Shocker!). Put more moderately -- ML-style languages are good for parsing, interpreting and compiling code. Claude is not the magic here -- ML is.

    I would also add that I've had decent success vibe-coding+human-coding Haskell (contrary to the article). My experience is that if I can hand-write a rich set of types (blessed be IxMonad), I can throw Claude to fill in the blanks for the implementations. If I can design the data structures that make the program tick, bridging them is something Claude is awesome at. Again, no surprise -- it's intern-level work.
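    That division of labor works in any typed language: the human pins down the types and signatures, the model fills in the bodies, and the type-checker catches drift before anything runs. A minimal Python sketch of the idea (the `Tokenizer` protocol is invented for illustration):

```python
from typing import Protocol

# Human-authored contract: the types pin down what any implementation must do.
class Tokenizer(Protocol):
    def encode(self, text: str) -> list[int]: ...
    def decode(self, ids: list[int]) -> str: ...

# Model-filled blank: a trivial implementation satisfying the contract.
class CharTokenizer:
    def encode(self, text: str) -> list[int]:
        return [ord(c) for c in text]

    def decode(self, ids: list[int]) -> str:
        return "".join(chr(i) for i in ids)

def roundtrip(tok: Tokenizer, text: str) -> bool:
    # A static checker (mypy/pyright) verifies CharTokenizer structurally
    # matches Tokenizer before any generated code is executed.
    return tok.decode(tok.encode(text)) == text

assert roundtrip(CharTokenizer(), "hello")
```

    Python's structural typing is a much weaker fence than Rust's or Haskell's, but the workflow is the same: the richer the contract you write, the fewer blanks the model can fill in wrongly.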

    The key distinction between C, Zig and Rust is that Rust is designed around types. C and Zig are more memory-oriented -- they really see most of your program as flat memory and you can kind of shoehorn a little bit of data layout into that flat memory. While this offers a large amount of flexibility, this philosophy isn't well suited for proving out correctness. But again -- this doesn't mean they don't have a spot.

    When I was a junior at Tesla, I used to joke that senior staff had VMs in their heads, because that's really how you analyze C programs -- you try to execute them in your head, with interesting inputs, but that's about it. Claude's head-VM is quite fuzzy and often makes errors.

    With Rust, if you design your type system, you prevent yourself from making dumb mistakes. Swap out "yourself" with Claude here and it's the same story.

    I've yet to see Claude design really nice type systems, fwiw.

    But the point is -- Claude is the enemy of beauty and correctness, and it's up to the SWE to design a type system that prevents it from eroding either. To be clear, I obsess over type systems personally, but that's not the only way: incredibly rich, comprehensive type systems, fuzzing, Antithesis, and proptesting are all valid ways to minimize the impact of slop.

    ---

    > Code is not written by humans therefore it doesn't matter that you don't know Rust.

    Wouldn't say this was explicitly stated, but I definitely smelt this undertone throughout the article. If you don't understand the language you're reading, how can you understand whether the code in front of you is correct or not? If you have a systems engineer sitting across you to clean your PRs up, you can pass that responsibility onto them, but what about when they give their two weeks?

    If all you know is Python, chances are you're going to make better software in Python than in Rust. Stick an `Arc<Mutex<T>>` everywhere and chances are your code will actually be slower. If you want to learn Rust, please join us! But if all you're trying to do is vibe-code better code -- do it in the language you know and can actually debug when shit hits the fan.

    ---

    > Anthropic C Compiler

    It is certainly impressive that Claude can take existing code and rewrite it, but I'd like to repeat the exact same rhetoric that many have given -- rewriting =/= original authorship. Awesome, we have a C compiler, but we already had one, and we just rewrote it? Seems like a bit of wasted electricity.

    To build on top of this, I am really happy that Bun is exploring Rust, and the Claude rewrite is truly impressive, but quite surprising at times, preserving strange anti-patterns (my name being said anti-pattern, teehee): https://github.com/oven-sh/bun/blob/ffa6ce211a0267161ae48b82.... It's hard to determine why Claude decided this -- I assume a really strict input prompt.

    Do note that the current stage of that PR is much better than what it was at the state of that commit, and obviously Jarred isn't merging blind slop, but that is still human-driven by someone who has an understanding of their product.

    My bet is actually that _rewrites_ of already-functioning, well-tested code, are likely to be more common as time progresses. I think that's what Claude is really awesome at, and I think Claude can often achieve 80-20 improvements through rewrites. Again, Claude alone will not be a silver bullet -- it won't generate data-oriented programs if the source material wasn't data-oriented. It won't optimize for cache coherency, if the source didn't, but moving from Python to Rust alone, with more-or-less the same code structure, you're likely to see improvements by virtue of common operations being memory-coherent and avoiding the GIL and so on.

    ---

    > A C compiler written in Rust used to be a graduate thesis. It isn’t anymore.

    Come on, this is disingenuous -- a simple C compiler is a one-day project. LLVM is a graduate thesis (and for good reason). Copy-pasting prior art is academic dishonesty, and Claude does a lot of that.

    ---

    For transparency: I work with Noah.

    EDIT: Wanted to add that not a single line of my comment was AI generated.

  • th1sisoldnews 20 hours ago
    This idea is already being taken to the next step in labs; why generate code?

    When I run a game I don't care if the dev used C or whatever. Only programmers care about the syntactic representation.

    I need the machine code/byte code patterns/geometric/color gradient data.

    Eventually Python will be what you see on screen but no cPython interpreter program as we know it will be running

    The model will have an internal awareness of the result to return without running an actual REPL

    https://dev.to/zijianhuang/prompt-to-ai-generated-binary-is-...

  • ivolimmen 3 hours ago
    Writing a poc requires speed in development and you want it to be thrown away when the poc is done. So I say we should all do poc's in BASIC. /s
  • mohamedkoubaa 20 hours ago
    Perl might just be the most token-efficient language
  • maxdo 17 hours ago
    Despite lots of influencers (Karpathy) I personally trust, the industry is taking the opposite turn for a reason:

    https://platform.claude.com/docs/en/agents-and-tools/tool-us... Also Claude Cowork, etc.

    1. You don't need compilation... run and test faster. Compilers were primarily built to prevent human error, and only very secondarily to guard your business logic.

    2. Your validators quite often need to evolve. With Python or JS, this is a pydantic edit + run. Imagine 3–4 iterations of the same in Rust?

    3. Composition. The entire cycle of software changes. An agentic system takes orders from a human, reads some kind of cache and snippets, writes/combines snippets, tests it, runs it, and fixes it. This almost pushes you toward snippets the size of a function, which still need to be covered with tests. I can easily build 10 function-sized Python files and write an agent that will mix and match 3 of them into a final result. With a compiled language, you'd need to compile 10 times — or store the binaries and think about what platform they'll execute on, etc.
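    Point 2 is where the interpreted-language loop shines: evolving a validator is an edit-and-rerun cycle with no compile step. A plain-stdlib sketch of that loop (pydantic collapses this to a few declarations, but the iteration pattern is the same; `Order` and its rules are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Order:
    sku: str
    quantity: int

def validate(order: Order) -> list[str]:
    """Return a list of problems. Evolving the rules is just editing
    this function and rerunning -- no compile/link step in between."""
    errors = []
    if not order.sku:
        errors.append("sku must be non-empty")
    if order.quantity <= 0:
        errors.append("quantity must be positive")
    # Iteration 2, added after a failed run: cap bulk orders.
    if order.quantity > 1000:
        errors.append("quantity exceeds bulk limit")
    return errors

assert validate(Order("ABC-1", 5)) == []
assert validate(Order("", 0)) == ["sku must be non-empty",
                                  "quantity must be positive"]
```

    Each new rule is one edit plus one run; in a compiled language the same 3-4 iterations each pay the build tax before the first test executes.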

    I love the fact that the author is questioning this. No doubt the market for your favorite language will change. 80% of languages will go away — there is no market anymore for such a big variety of languages.

    • goatlover 17 hours ago
      > 80% of languages will go away — there is no market anymore for such a big variety of languages.

      That's kind of sad, but so many older languages have been declared dead only to hang in various niches or out of sight for decades.