10 comments

  • infinitepro 16 minutes ago
    Unless I am mistaken, this would be the first heuristic-free model trained to play Tetris, which is pretty incredible, since mastering Tetris from just the raw game state has never come close to being solved, till now(?)
  • omneity 5 hours ago
    Related: I hear about curriculum learning for LLMs quite often, but I couldn’t find a library to order training data by an arbitrary measure like difficulty, so I made one[0].

    What you get is an iterator over the dataset that samples based on how far you are in the training.

    0: https://github.com/omarkamali/curriculus
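
    A minimal sketch of the idea in Python (not the actual curriculus API; every name below is made up): sort by a difficulty score, then sample from a pool that widens as training progresses.

      import random

      def curriculum_iter(examples, difficulty, total_steps, seed=0):
          # Sort indices easiest-first, then sample from a pool that
          # widens from "easiest only" to "everything" over training.
          rng = random.Random(seed)
          order = sorted(range(len(examples)), key=lambda i: difficulty[i])
          for step in range(total_steps):
              progress = (step + 1) / total_steps
              cutoff = max(1, int(progress * len(order)))
              yield examples[order[rng.randrange(cutoff)]]

      # usage: short strings count as "easy" here
      data = ["a", "bb", "ccc", "dddd"]
      for ex in curriculum_iter(data, [len(x) for x in data], total_steps=8):
          print(ex)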

  • gyrovagueGeist 2 hours ago
    I've always found curriculum learning incredibly hard to tune and calibrate reliably (even more so than many other RL approaches!).

    Reward scales and horizon lengths vary across tasks of different difficulty; exploring policy space effectively is tricky (you want to keep multimodal strategy distributions alive before the agent overfits to the small problems); and there's catastrophic forgetting when curriculum levels are mixed or introduced too late.

    Do any readers (or the author) have good heuristics for these? Or is it so problem-dependent that a hyperparameter search for something that works in spite of these challenges is still the go-to?

  • bob1029 5 hours ago
    > To learn, agents must experience high-value states, which are hard (or impossible) for untrained agents to reach. The endgame-only envs were the final piece to crack 65k. The endgame requires tens of thousands of correct moves where a single mistake ends the game, but to practice, agents must first get there.

    This seems really similar to the motivations around masked language modeling. By providing increasingly-masked targets over time, a smooth difficulty curve can be established. Randomly masking X% of the tokens/bytes is trivial to implement. MLM can take a small corpus and turn it into an astronomically large one.
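
    A toy sketch of that kind of masking curriculum (nothing from the post, just the generic idea): ramp the mask fraction up as training progresses.

      import random

      MASK = "<mask>"

      def mask_sequence(tokens, mask_frac, rng):
          # Hide a random fraction of tokens; the hidden originals are the targets.
          n_mask = max(1, int(mask_frac * len(tokens)))
          hidden = set(rng.sample(range(len(tokens)), n_mask))
          inputs = [MASK if i in hidden else t for i, t in enumerate(tokens)]
          targets = [t if i in hidden else None for i, t in enumerate(tokens)]
          return inputs, targets

      rng = random.Random(0)
      tokens = "the quick brown fox jumps over the lazy dog".split()
      for frac in (0.1, 0.5, 1.0):  # curriculum: mask more as training advances
          print(frac, mask_sequence(tokens, frac, rng)[0])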

    • algo_trader 4 hours ago
      This is less about masked modelling and more about reverse-curriculum.

      e.g. the DeepCubeA paper from 2019 (!) on solving the Rubik's Cube.

      Start with the solved state and teach the network successively harder states. This is so "obvious" and "unhelpful in real domains" that perhaps they haven't heard of this paper.
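
      Roughly the recipe, on a made-up stand-in puzzle (a token stepping along a number line, not the Rubik's Cube setup from the paper): scramble the solved state by k random moves and grow k over training.

        import random

        SOLVED = 0
        MOVES = (-1, +1)  # stand-in move set for a toy puzzle

        def scramble(k, rng):
            # Walk k random moves away from the solved state.
            state = SOLVED
            for _ in range(k):
                state += rng.choice(MOVES)
            return state

        def reverse_curriculum(total_steps, max_scramble, seed=0):
            # Early start states sit next to the goal; later ones drift further away.
            rng = random.Random(seed)
            for step in range(total_steps):
                frac = step / max(1, total_steps - 1)
                yield scramble(1 + int(frac * (max_scramble - 1)), rng)

        print(list(reverse_curriculum(total_steps=10, max_scramble=20)))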

    • larrydag 5 hours ago
      perhaps I'm missing something. Why not start the learning at a later state?
      • bob1029 5 hours ago
        That's effectively what you get in either case. With MLM, on the first learning iteration you might only mask exactly one token per sequence. This is equivalent to starting learning at a later state. The direction of the curriculum flows toward more and more of these being masked over time, which is equivalent to starting from earlier and earlier states. Eventually, you mask 100% of the sequence and you are starting from zero.
      • LatencyKills 4 hours ago
        If the goal is to achieve end-to-end learning, that would be cheating.

        If you sat down to solve a problem you’ve never seen before, you wouldn’t even know what a valid “later state” looks like.

  • drubs 4 hours ago
  • someoneontenet 4 hours ago
    Curriculum learning helped me out a lot in this project too https://www.robw.fyi/2025/12/28/solve-hi-q-with-alphazero-an...
  • pedrozieg 4 hours ago
    What I like about this writeup is that it quietly demolishes the idea that you need DeepMind-scale resources to get “superhuman” RL. The headline result is less about 2048 and Tetris and more about treating the data pipeline as the main product: careful observation design, reward shaping, and then a curriculum that drops the agent straight into high-value endgame states so that it actually sees them in the first place. Once your env runs at millions of steps per second on a single 4090, the bottleneck is human iteration on those choices, not FLOPs.

    The happy Tetris bug is also a neat example of how “bad” inputs can act like curriculum or data augmentation. Corrupted observations forced the policy to be robust to chaos early, which then paid off when the game actually got hard. That feels very similar to tricks in other domains where we deliberately randomize or mask parts of the input. It makes me wonder how many surprisingly strong RL systems in the wild are really powered by accidental curricula that nobody has fully noticed or formalized yet.
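
    The deliberate version of that trick is tiny to write down; a generic sketch with no particular env or library in mind:

      import random

      def corrupt_observation(obs, drop_prob, rng):
          # Zero out random entries of the observation vector as cheap augmentation.
          return [0.0 if rng.random() < drop_prob else x for x in obs]

      rng = random.Random(0)
      print(corrupt_observation([0.2, 0.9, 0.1, 0.7, 0.5], drop_prob=0.3, rng=rng))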

  • jsuarez5341 4 hours ago
    [dead]
  • hiddencost 5 hours ago
    Those are not hard tasks ...
  • kgwxd 3 hours ago
    Great, add "curriculum" to the list of words that will spark my interest in human learning, only for it to be about garbage AI. I want HN with a hard rule against AI posts.
    • yunwal 3 hours ago
      Are we really dismissing the entire field of AI just because LLMs are overhyped?
      • kgwxd 1 hour ago
        Believe it or not, you can visit more than 1 website. How about a guideline to put (AI) like we do with (video). I'm just sick of having to click to figure out if it's about humans or computers. They've hijacked every single word related to the most fascinating thing in the entire universe just to generate ad revenue and VC funding.
        • pessimizer 32 minutes ago
          The famous Hacker News website is about computers. It is also about ad revenue and VC funding. It was originally named Startup News, and its patron and author is the multibillionaire founder of a well-known "startup accelerator" called "Y Combinator."

          > Believe it or not, you can visit more than 1 website.

    • artninja1988 3 hours ago
      Why garbage ai? I thought it was a very interesting post, personally.
    • utopiah 3 hours ago
      > HN with a hard rule against AI posts.

      Greasemonkey / Tampermonkey / User Scripts with

      Array.from( document.querySelectorAll(".submission>.title") ).filter( e => e.innerText.includes("AI") ).map( e => e.parentElement.style.opacity = .1)

      Edit: WTH... how am I getting downvoted for suggesting an actual optional solution? Please clarify.

      • snet0 3 hours ago
        Notably this doesn't match the current thread.
        • shwaj 1 hour ago
          Could always run the posts through an LLM to decide which are about AI :-p
        • utopiah 2 hours ago
          Swap e.innerText.includes("AI") for a check against an array of whatever terms you prefer.