Knowing what to ask is more important than having all the right answers. That has been a common maxim for a long time, but it’s more relevant now than ever.
LLMs are over-powered autocorrect tools. They have what I call pseudo-intelligence. That is, they are designed to predict what the next word (token) is likely to be, but they lack true understanding of the matter at hand. As such, it’s important that we use LLMs as a tool to empower human endeavour rather than as a means of replacing human thought.
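To make the “over-powered autocorrect” idea concrete, here is a minimal sketch of greedy next-token prediction. It assumes the Hugging Face `transformers` library and the small `gpt2` checkpoint, both of which are my choices for illustration rather than anything this post prescribes; the point is only that the model repeatedly picks the token it scores as most likely, with no deeper grasp of the subject.

```python
# Minimal sketch: greedy next-token prediction (assumes `transformers` and the
# "gpt2" checkpoint, chosen purely for illustration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Knowing what to ask is more important than"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Repeatedly append whichever token the model scores as most likely to come next.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits      # scores for every vocabulary token
    next_id = logits[0, -1].argmax()          # take the single highest-scoring token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```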
Many people are still waiting for AI or LLMs to generate novel solutions to age-old problems like curing cancer or ending world hunger. To me, this is like sitting in front of a full-course meal, hungry, and complaining that you can’t start eating because your drink hasn’t arrived. I believe significant good can be achieved by understanding multiple domains, a feat previously out of reach because our finite time only allowed us to deeply comprehend a few fields. Today, with the right prompts, LLMs can help us grasp much more, a privilege once reserved for those fortunate enough to be in environments with experts capable of simplifying complex ideas. I think that’s all we need at the moment. While it would be awesome to have models capable of novel thought, what we need is to understand more ourselves, and the models are here to help us do that. Just as in cooking, the more ingredients we are aware of, the more interesting the meal can be (a proper understanding leads to the proper use, or exclusion, of each ingredient).
to be continued
Disclaimer: The ideas presented in this post are solely my personal perspective and have not been substantiated by any verifiable evidence. Please form your own opinions on such matters.