On November 30, 2022, a company named OpenAI introduced its artificial intelligence tool ChatGPT to the world. Freely available to the public, ChatGPT caused an instant sensation. People were intrigued by its potential to generate meaningful content quickly and easily, and even to help solve a wide range of problems. It reached a million users within five days of its launch, and some hailed it as a development as significant as Google, or even the Internet itself.
When I learned about ChatGPT shortly after its launch, my first thought was to wonder whether it would validate the ISIT Reality Model.
I signed up for ChatGPT as my anonymous avatar, Mentor, and began running queries to see if AI would be able to assign dualistic word pairs accurately to the ISIT model. It only took a few minutes before I realized that ChatGPT was the perfect tool to validate the model.
After training it with just a handful of word pairs, I began prompting it with dualities I had already set up in the ISITometer, or was planning to set up. The results were exhilarating!
One by one, I saw that ChatGPT agreed with my own assignments. Furthermore, I trained it to provide a reason for each of its associations, and those reasons were generally in agreement with my own thought process. In fact, rather than just feeding it dualities and having it match them, I was able to prompt ChatGPT to generate new dualities itself and map those.
Some of the dualities it generated were ones that I hadn’t even considered, and the ChatGPT assignments on those made sense to me.
It wasn’t perfect. Out of about 160 dualities I ran through ChatGPT, we agreed on 151, or about 94%. The nine we didn’t agree on came down to two reasons:
The first reason is that ChatGPT simply got it wrong. For example, on the Hard/Soft duality, ChatGPT associated the word ‘Hard’ with IS and ‘Soft’ with IT. In my mind, it’s obviously the other way around, and I’m confident humans who grasp the concept will agree. On this word I was even able to get ChatGPT to recognize its own logical contradiction, and it changed its answer. Later, though, it reverted to its original assignment.
Only three were clearly wrong. The other six were judgment calls, because the concepts were a little too obscure; these are the ones humans may also disagree about. For example, for the duality Apparent/Obscure, ChatGPT associated Apparent with IS and Obscure with IT. That was a fairly logical choice, considering several of the other dualities, such as Public (IS)/Private (IT). But I was thinking of IT as only the surface-level appearance of what things are, with IS being more subtle and not obvious.
I still think my interpretation is closer to the mark, as I do on the other judgment calls. And I suspect that humans will come around to the same conclusions I reached on these, as will ChatGPT once it is sophisticated enough to absorb the entire concept.
For now, ChatGPT is still at a relatively rudimentary level, yet it already agrees with the developer of the model 94% of the time, and it was clearly wrong only about 2% of the time. That’s remarkable!
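For readers who want to check the arithmetic behind those percentages, here is a quick tally. The counts are taken from the figures above (roughly 160 dualities, 151 agreements, 3 clear errors, 6 judgment calls); the variable names are just for illustration.

```python
# Tally of the ChatGPT validation run described in this post.
total = 160           # dualities run through ChatGPT (approximate)
agreed = 151          # assignments matching my own
clearly_wrong = 3     # e.g. Hard/Soft reversed
judgment_calls = 6    # e.g. Apparent/Obscure

agreement_rate = agreed / total        # ~94%
error_rate = clearly_wrong / total     # ~2%

print(f"Agreement: {agreement_rate:.1%}")
print(f"Clearly wrong: {error_rate:.1%}")
```

The 94% figure counts only exact agreements; the six judgment calls (about 4% of the total) are neither agreements nor clear errors.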
But most gratifying to me is that I now have solid validation that I’ve been on the right track all these years. My original purpose in developing the ISITometer was to give humans a way to understand and validate the model, and then to expand on it. I was never able to get strong validation from the small circle of friends and family I ran it by, so all these years I’ve been working on this project without really knowing whether I was on target or just deluding myself.
Now I know.
The ISITometer will reveal the fundamental nature of Reality to those who will take the time to learn and truly understand it.
This part of the ISITometer story brings us up to the present day, and the writing of this blog post.
Tomorrow I’ll start promoting the ISITometer to the world.