How do we create an AGI that benefits us?
With the recent release of GPT-4.5, I’ve been reflecting on how Large Language Models (LLMs) should not be limited solely to System 2 thinking capabilities. In other words, we shouldn’t value LLMs merely as quick look-up tools for accurate information or logical reasoning. Instead, we should also appreciate their potential to understand human emotions and intentions, enabling them to better align with our interests.
GPT-4.5 answers questions more concisely and demonstrates improved emotional understanding. Unlike its predecessor, GPT-4, which often struggled to grasp the true intent behind a question and resorted to dumping excessive information, GPT-4.5 is better at responding in the way the asker actually wants. Admittedly, this improvement might simply result from a system prompt like “be concise,” which could have been applied to GPT-4 as well.
After all, humans are a blend of emotion and rationality. While we may rationally understand what is true, our emotions often guide our decisions. Perhaps a more accurate term for “emotions” in this context would be “morality.”
Developing a sense of morality in AGIs is crucial. We need these tools to be ethically aligned with our interests, ensuring greater confidence that, as we empower them to act more autonomously, they will not cause harm.
Take a moment and think about thinking. When you do math problems, you toggle on the “thinking mode” meant for technical tasks. When you want to console someone who has gone through a tough time, you toggle on the “emotional mode” meant for human interactions. You don’t use the same “brain tools” for these tasks. I think this can be generalised to literally everything we do.
So I find myself understanding what OpenAI is doing with this new release of GPT-4.5. It’s not the most powerful model in the world, but it’s currently the best at human interactions. OpenAI is training multiple specialised models that are almost synonymous with our aforementioned “brain tools,” and all they need is a little router (think the trunk of a tree that leads to the branches) to connect the user’s query to the appropriate tool. This, I believe, is how AGIs, if they are built at all, will be built in the future.
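The router idea can be made concrete with a toy sketch. Everything here is illustrative: the model names, the keyword heuristic, and the two-way split are my own placeholders, not OpenAI’s actual design (a real router would itself be a learned classifier).

```python
# Toy "trunk of the tree" router: send each query to the specialist
# (hypothetical) model best suited for it. The keyword heuristic is a
# stand-in for a learned classifier; the model names are made up.

SPECIALISTS = {
    "technical": "reasoning-model",   # hypothetical System-2-style model
    "emotional": "empathy-model",     # hypothetical GPT-4.5-style model
}

# Crude emotional-content cues, purely for illustration.
EMOTIONAL_CUES = {"feel", "sad", "upset", "lonely", "console", "grief"}

def route(query: str) -> str:
    """Pick a specialist model name for the query."""
    words = set(query.lower().split())
    if words & EMOTIONAL_CUES:
        return SPECIALISTS["emotional"]
    return SPECIALISTS["technical"]

print(route("Prove that the square root of 2 is irrational"))
print(route("My friend is sad and I want to console her"))
```

The point isn’t the heuristic itself but the shape of the system: cheap dispatch at the trunk, expensive specialised competence at the branches.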
To be continued. We live in such fascinating times…