What is the right question?

  • is AI self-aware?
  • should AI have human-like rights?
  • can we trust AI and how much?
  • if AI generates an essay that ends up published in a reputable journal, was it creative?
  • will AI kill humanity when it gets the chance?

These are some candidate questions worth answering, but I can’t really commit to any of them.

Everyone is super sensitive about the distinction between AI, AGI, and humans; people even fight over which pronoun to use when referring to an LLM. But thinking about these things made me realise that it is much easier to talk (generate text) than it is to actually know what question we are trying to answer.

What interesting questions am I missing?