I have very wide error bars on the potential future of large language models, and I think you should too. It's possible LLMs basically lead to AGI, and it's also possible they plateau.

I wouldn't be surprised if, in three to five years, language models can perform most (all?) economically useful cognitive tasks beyond the level of human experts. And I also wouldn't be surprised if, in five years, the best models are better than today's only in "normal" ways: costs keep falling considerably and capabilities keep improving, but there's no fundamental paradigm shift that upends the world order.
