Advent of Swift

(leahneukirchen.org)

79 points | by chmaynard 1 day ago

4 comments

  • ChrisMarshallNY 22 hours ago
    I've been writing Swift daily since the day it was announced, and have shipped a number of apps written in it.

    I have learned to like the language. It's not perfect, but it comes closer than most. I've written in a lot of languages over the years.

    My other language is PHP, which I use for my backend work. I've probably been writing that for over twenty years, but I still don't like the language.

    As I was learning Swift, I started this series of posts[0]. It's still ongoing, but I haven't added anything in a while, and the language has progressed since the earlier posts.

    [0] https://littlegreenviper.com/series/swiftwater/

    • antman123 20 hours ago
      Why don't you try Swift on the server?
      • stephen_g 20 hours ago
        As someone wanting to play around with it - is Vapor still the framework people recommend, or is Hummingbird the new hotness?
        • ezfe 20 hours ago
          My understanding is they both have their strengths. If you want to build everything yourself, Hummingbird seems like the way to go but Vapor is more batteries-included.
      • ChrisMarshallNY 20 hours ago
        Lots of reasons. The biggest one is that I write stuff that needs to host on the most basic servers out there, usually cheap-ass shared LAMP hosting.

        It’s no big deal. I don’t really do much backend work, so PHP is fine for that.

  • antfarm 20 hours ago
    I did Advent of Code 2024 in Swift in a functional style, without using mutable state, custom data types (i.e. classes or structs) or loops.

    https://github.com/antfarm/AdventOfCode2024

  • kris-s 1 day ago
    >The string processing is powerful, but inconvenient when you want to do things like indexing by offsets or ranges, due to Unicode semantics. (This is probably a good thing in general.)

    This is being too generous to Swift's poorly designed String API. The author gets into it immediately after the quote, with an Array<Character> workaround, regex issues, and later Substring pain. It's not a fatal flaw; a language backed by one of the richest companies in the world can have a few fatal flaws. But AoC in particular shines a light on it.

    I really like Swift as an application/games language, but I think it's unlikely it can ever escape that domain.
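    The Array<Character> workaround mentioned above looks roughly like this (a minimal sketch, not the author's exact code):

```swift
// Swift's String can't be subscripted by Int, so AoC-style code often
// converts to [Character] up front to get O(1) subscripting.
let line = "ABCDEF"
let chars = Array(line)        // [Character], random access
print(chars[2])                // prints "C"
print(String(chars[1...3]))    // prints "BCD"
```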

    • frizlab 1 day ago
      > poorly designed String API

      I wholeheartedly disagree and counter that all other String APIs are wrong (bold statement, I know). Accessing a random index of a String is a complex (slow) operation and, as such, should be reflected as complex in the code, especially since people usually assume it is not.

      If you want an array of UInt8, just use that.
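      A minimal sketch of what "just use that" can look like, assuming the input is known to be ASCII (as in AoC puzzles):

```swift
// Dropping from String to bytes when the input is known to be ASCII.
// The UTF-8 view is still indexed by String.Index, but copying it into
// an Array gives plain Int subscripting.
let input = "3-7 x: xqxxxxxxxxxxxxxxxxxx"
let bytes = Array(input.utf8)         // [UInt8]
print(bytes[1] == UInt8(ascii: "-"))  // prints "true"
```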

      The part about regexes I agree with: they are slow, and that's a shame. I don't personally use regexes much, and don't think they should be used much in prod either unless there are no other options, but that doesn't excuse a poor implementation.

      Regarding the domain, I recognize it seems to have difficulty escaping "native iOS/macOS apps," but IMHO it shouldn't be confined there. It is a language that is simple to use, with a reasonable memory-handling default (ARC), though it can also use a Rust-style memory ownership model. Generally speaking, Swift is usable everywhere. I use it personally for an app (native and web front ends, plus the back end), and it is extremely cool.

      Its ecosystem is also becoming quite interesting. Most of the libs are from Apple, yes, but they are also of very high quality.

      All in all, I think it's a shame Swift is not more widely used in the industry.

    • amomchilov 22 hours ago
      FWIW, AoC is very non-representative of real-world string manipulation problems.

      The AoC format goes out of its way to express all problem inputs and outputs as simple strings of basic ASCII text, just for compatibility with as many programming environments as possible. This is very different from almost all real-world problems, where the complexities of human language loom large.

    • happytoexplain 23 hours ago
      > poorly designed String API

      Nope nope nope.

      I have to agree strongly with my sibling commenter. Every other language gets it horribly wrong.

      In app dev (Swift's primary use case), strings are most often semantically sequences of graphemes. And, if you at all care about computer science, array subscripting must be O(1).

      Swift does the right thing for both requirements. Beautiful.

      OK, yes, maybe they should add a native `nthCharacter(n:)`, but that's nitpicking. It's a one-liner to add yourself.
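      For illustration, that one-liner might look like this (the name `nthCharacter` is hypothetical, not a standard library API):

```swift
extension String {
    // Hypothetical helper: walks to the nth grapheme cluster.
    // The O(n) cost is visible in the call to index(_:offsetBy:).
    func nthCharacter(_ n: Int) -> Character? {
        guard n >= 0, n < count else { return nil }
        return self[index(startIndex, offsetBy: n)]
    }
}

print("héllo".nthCharacter(1) ?? "?")  // prints "é"
```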

      • tialaramex 21 hours ago
        I don't think Rust gets this horribly wrong. &str is some bytes which we've agreed are UTF-8 encoded text. So, it's not a sequence of graphemes, though it does promise that it could be interpreted that way, and it is a sequence of bytes but not just any bytes.

        In Rust, "AbcdeF"[1] isn't a thing; it won't compile. But "AbcdeF"[1..=1] says we want the UTF-8 substring from byte 1 through byte 1, and that compiles and works, because the string does have a valid UTF-8 substring there: it's "b". However, it'll panic if we try "€300"[1..=1], because that's no longer a valid UTF-8 substring; that's nonsense.

        For app dev this is too low level, but it's nice to have a string abstraction that's at home on a small embedded device where it doesn't matter that I can interpret flags, or an emoji with appropriate skin tones, or whatever else as a distinct single grapheme in Unicode, but we would like to do a bit better than "Only ASCII works in this device" in 2025.

        • Someone 15 hours ago
          > I don't think Rust gets this horribly wrong

          > In Rust "AbcdeF"[1] isn't a thing, it won't compile, but "AbcdeF"[1..=1] says we want the UTF-8 substring starting from byte 1 through to byte 1 and that compiles, and it'll work because that string does have a valid UTF-8 substring there, it's "b" -- However it'll panic if we try to "€300"[1..=1]

          I disagree. IMO, an API that uses byte offsets to take substrings of Unicode text (code points, or even larger units?) is already a bad idea, but then having it panic when the byte offsets don't happen to fall on code point/(extended) grapheme cluster boundaries?

          How are you supposed to use that when, as you say ”we would like to do a bit better than "Only ASCII works in this device" in 2025”?

          I see there's a better API that doesn't panic (https://doc.rust-lang.org/std/primitive.str.html#method.get), but IMO that still isn't as nice as Swift's choice, because it still uses byte offsets.

          • tialaramex 13 hours ago
            > How are you supposed to use that [...]?

            It's often the case that we know where a substring we want starts and ends, so this operation makes sense; because we know there's a valid substring there, it won't panic. For example, if we know there are literal colons at bytes 17 and 39 of our string foo, then foo[18..39] is the UTF-8 text from bytes 18 to 38 inclusive: the string between those colons.

            One source of confusion here is not realising that UTF-8 is a self-synchronising encoding. There are a lot of tricks that are correct and fast with UTF-8 but would be a disaster in the other multi-byte encodings, or if (which is never the case in Rust) this weren't actually a UTF-8 string.

        • zzo38computer 16 hours ago
          You can do better than "only ASCII works in this device", and making the default string type Unicode is the wrong way to do it. For some applications you might not need to interpret text at all, or you might only need to interpret the ASCII portions even if the text is not purely ASCII. Other times you will want to do other things, but Unicode is not a very good character set (there are others, though which is appropriate depends heavily on the specific application; sometimes none are). And even if you are using Unicode, you still don't need a Unicode string type, and you don't need it to check for valid UTF-8 on every string operation by default, because that results in inefficiency.
          • tialaramex 7 hours ago
            In 1995 what you describe isn't crazy. Who knows if this "Unicode" will go anywhere.

            In 2005 it's rather old-fashioned. There's lots of 8859-1 and cp1252 out there but people aren't making so much of it, and Unicode aka 10646 is clearly the future.

            In 2015 it's a done deal.

            Here we are in 2025. Stop treating non-Unicode text as anything other than an aberration.

            You don't need checks "for every string operation". You need a properly designed string type.

      • ks2048 21 hours ago
        I think using "extended grapheme clusters" (EGCs), rather than code points or bytes, is a good idea. But why not let you write "x[:2]" (or "x[0..<2]") for a String to get the first two EGCs? (Maybe better yet, make that return "String?".)
        • ezfe 20 hours ago
          Because that implies that String is a random access collection. You cannot constant-time index into a String, so the API doesn't allow you to use array indexing.

          If you know it's safe to do you can get a representation as a list of UInt8 and then index into that.
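          Concretely, the API the parent describes looks like this, a small sketch where the linear walk is explicit at the call site:

```swift
let s = "Advent of Swift"
// s[2] doesn't compile; you advance a String.Index instead,
// which makes the O(n) traversal visible in the code.
let i = s.index(s.startIndex, offsetBy: 2)
print(s[i])  // prints "v"
```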

        • zzo38computer 20 hours ago
          I disagree; I think it should be indexed by bytes. One reason is what the other comment explains about indexing not being constant-time (which is a significant reason). The other is that grapheme indexing ties the type to Unicode (which has its own problems) and to specific versions of Unicode, which can cause problems when a different version of Unicode is in use. A separate library can deal with code points and/or EGCs if that is important for a specific application; these features should not be inherent to the string type.
          • novok 15 hours ago
            In practice, that is tiring as hell: verbose, awkward, unintuitive, requiring index types tied to a specific string instance just to do numeric character indexing, and a whole bunch of other unnecessary ceremony not required in other languages.

            We don't care that it takes longer; we all know that. We still need to do a bunch of string operations anyway, and doing the equivalent thing in Swift is way worse than in pretty much any other language.

          • ks2048 17 hours ago
            I don't think you can separate String from Unicode - that's what a "String" is in Swift.
            • zzo38computer 16 hours ago
              In Swift (and in other programming languages) it does use Unicode, but I think it probably would be better if it didn't. And even when there is a Unicode String type, I still think it should not be indexed by grapheme clusters, in my opinion; I explained some of my reasons for that above.
  • ChefboyOG 1 day ago
    I'm curious, in what niches are people using Swift for new applications these days? I've enjoyed working with Swift in the past (albeit in very limited capacities), but I haven't personally come across any Swift-based initiatives in a while. I had high hopes for Swift for TensorFlow, but it was ultimately killed off.