4 comments

  • rao-v 12 minutes ago
I don’t really think this reflects the current era of challenges.

    The “enforcement layer” is the hardest and most important part, and is barely addressed.

    - is the answer structurally / syntactically valid?

    - is it appropriately grounded and evidenced?

    - is it accurate? In what ways does it fall short?

Each of these should trigger the agent to rework and resubmit, or, failing that, a disclosure to the user about how the answer falls short and should be reviewed / remediated.

    This feels like it’s from the era of trying to oneshot a good enough answer.
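The enforcement loop described above (check the answer, feed failures back for rework, and disclose anything still unresolved) could be sketched roughly like this. All names here (`generate`, the check functions, `max_attempts`) are illustrative placeholders, not any real API:

```python
from typing import Callable

def enforce(generate: Callable[[list[str]], str],
            checks: dict[str, Callable[[str], bool]],
            max_attempts: int = 3) -> tuple[str, list[str]]:
    """Run hypothetical validity/grounding/accuracy checks on an answer,
    asking the agent to rework on failure. Returns (answer, unresolved),
    where a non-empty `unresolved` list should be disclosed to the user."""
    answer = generate([])  # first attempt, no feedback
    for _ in range(max_attempts):
        failures = [name for name, ok in checks.items() if not ok(answer)]
        if not failures:
            return answer, []
        # rework and resubmit, passing the failed checks back as feedback
        answer = generate(failures)
    # failing that: re-check the final attempt and disclose its shortfalls
    failures = [name for name, ok in checks.items() if not ok(answer)]
    return answer, failures
```

The point is that the checks gate the output in a loop, rather than hoping one shot is good enough.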

  • slashdave 1 hour ago
    > the information an AI system needs to produce accurate ... outputs

    I would have stuck a qualifier in there

  • r4ge 1 hour ago
I feel like AI is going to be doing all the fun stuff and I will just be left organizing the data and docs it needs to generate code.
  • tmpz22 1 hour ago
Putting engineering after a term doesn't make it engineering.
    • jryio 13 minutes ago
Software engineering is certainly not engineering. Even at the highest levels. Real engineering involves infinitely more complex interactions with the physical world than symbolic instructions for machines.
    • slashdave 55 minutes ago
      Probably just using the convention started by the term "prompt engineering", which is forgivable.
      • sroussey 42 minutes ago
        not sure i forgive "prompt engineering"