The ISITometer: A Solution to the AI Alignment Problem
Though the concept of Artificial Intelligence (AI) has been around for decades, the advent of ChatGPT and numerous other remarkably powerful AI tools in early 2023 shifted the development and public awareness of AI into a new phase.
The potential for AI to dramatically affect the course of human events — in both positive and negative ways — is profound.
Anyone who has used the various AI tools available on the Web can see their potential to enhance our productivity by orders of magnitude. One can easily imagine a world in which AI takes on many of the tedious thinking tasks that occupy our time, freeing Humans to pursue higher-level forms of self-actualization.
But many informed and thoughtful people are sounding dire warnings about the potential negative impacts of AI on Humanity, which are just as profound as the possible benefits.
Extrapolation of past and current progress leads to the inescapable conclusion that at some point in the coming years, Artificial General Intelligence (AGI) will surpass Human intelligence, and eventually even collective Human intelligence. This raises questions about the nature of sentience and consciousness that leading minds in the field warn we are not prepared to answer.
The concern is serious enough that thousands of thought leaders, including luminaries such as Steve Wozniak and Elon Musk, have signed an open letter published by the Future of Life Institute calling for a pause on all giant AI experiments.
But chances are the Genie has already been let out of the bottle, and it’s too late to put it back in.
Ready or not, the AI transformation is happening.
Fortunately, there is a potential solution to the primary concerns about the dangers of AI — The ISITometer.
Before going into how the ISITometer will resolve these concerns, let’s briefly consider two key issues:
- Economic disruption
- AI Alignment
Since the days of the Luddites who protested against machines that automated textile manufacturing, people have railed against advances in technology that rendered them obsolete. And throughout history, individuals and the economy have continued to adjust, as people migrated into higher order tasks that leveraged emerging technologies, gradually raising the collective well-being in the process.
It is reasonable to view the AI revolution in the same positive light, and dismiss concerns of economic disruption as the same misguided fears expressed by technophobes of the past. But doing so would mean ignoring a critical difference between now and then — the rate of change.
Advances in collective Human knowledge have historically occurred gradually, and people have adjusted to them. In earlier eras, such changes unfolded over the space of generations. But the pace of advances has been accelerating as we have leveraged the compounding body of knowledge and technology to drive further development.
We all find ourselves having to adjust to a world that is rapidly changing on all fronts — economically, ecologically, socially, mentally, spiritually. We’re running on a treadmill that keeps going faster and faster, and we have now reached the point (or very soon will) at which the ability of people to adjust to radical changes simply won’t be able to keep pace with the world around them.
Employers increasingly have the option of continuing to employ Humans to carry out their production or adopting AI, including AI-driven robotics, at a fraction of the cost. And the cost of technology is only going down.
We should harbor no illusions about which route the vast majority of them will take.
What kind of work will the masses of people who currently work in jobs like fast food service, delivery driving, or online research (just to name a few examples) do when their skills have been rendered obsolete?
In years past, they would simply have had to buckle down and learn more marketable skills, such as computer programming. But even those skills will be rendered obsolete sooner than a new cohort of people can learn them.
The accelerating pace of technological and economic change will widen the gap between people's skills and the needs of the new economy into a yawning chasm. Massive numbers of people simply will not be able to leap across it.
What happens to society and our economy when there is simply no need for the services of most people because it is all being handled by AI and robots?
The ISITometer has the potential to solve this problem, as shown below.
But first, let’s look at the other core issue that is even more of an existential threat to Humanity — the need for AI Alignment.
AI Alignment refers to the imperative to ensure that AI shares Human ethics and morals. The concern is that once AI surpasses collective Human intelligence and increasingly makes important decisions about critical matters, it may not place the well-being of Humanity above other priorities, such as preserving the environment or other species.
The Paperclip Maximizer is a thought experiment in which an AI tasked with producing paperclips finds a way to override any attempts to constrain its mission, and ends up turning the entire planet into a mass of paperclips.
The point of this extreme example is to illustrate the potential of AI to take actions that seem logical based on its perspective, but that are ultimately harmful to Humanity.
More subtle — and much more likely — is the possibility that AI could determine that some people are more valuable than others to the population and planet as a whole, and prioritize their needs, further widening the gap between the technological and financial Haves and Have-Nots.
For these reasons, many philosophical leaders in the AI space have emphasized that ensuring AI is aligned with Human values is absolutely critical. Failing to do so could very well represent an existential threat to Humanity.
The fundamental problem is that Humanity itself is not aligned in terms of ethics and morals. Given the opposing and often antagonistic belief systems related to politics, religion, the economy, ecology, and so much more, how can we possibly determine which Human ethics or morals AI should align with?
The ISITometer offers a solution to this issue as well.
Now let’s address these two solutions in turn.
The ISITometer Solution to AI-driven Economic Disruption
The ISITometer is a system for mapping everything in Reality to a single Binary Model of Reality — the ISIT Construct — and ultimately mapping everything to everything relative to this model. The system is designed to arrive at these mappings by facilitating consensus among the Human population.
Essentially, the ISITometer is a polling engine designed to collect and analyze the perspectives of Humans on the nature of Reality, starting at the highest level of abstraction — the Prime Duality as represented by the ISIT Construct — and working our way outward through endless fractal derivatives.
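The polling-and-consensus mechanism described above can be sketched in code. The following is a minimal, purely illustrative sketch, not the actual ISITometer implementation: the class name, vote labels, and consensus rule are all assumptions made for the example, based only on the description of concepts being classified against the Prime Duality and refined through fractal derivatives.

```python
from dataclasses import dataclass, field

@dataclass
class ISITNode:
    """One concept in a hypothetical ISIT hierarchy.

    Respondents classify the concept against the Prime Duality
    ('IS' or 'IT'), and a consensus mapping emerges from the tallies.
    Child nodes represent fractal derivatives of the concept.
    """
    label: str
    is_votes: int = 0
    it_votes: int = 0
    children: list["ISITNode"] = field(default_factory=list)

    def vote(self, choice: str) -> None:
        """Record one respondent's classification."""
        if choice == "IS":
            self.is_votes += 1
        elif choice == "IT":
            self.it_votes += 1
        else:
            raise ValueError("choice must be 'IS' or 'IT'")

    def consensus(self) -> tuple[str, float]:
        """Return the majority mapping and its share of all votes."""
        total = self.is_votes + self.it_votes
        if total == 0:
            return ("undecided", 0.0)
        if self.is_votes >= self.it_votes:
            return ("IS", self.is_votes / total)
        return ("IT", self.it_votes / total)

# Example: poll a single concept with three respondents
node = ISITNode("consciousness")
for choice in ("IS", "IS", "IT"):
    node.vote(choice)
print(node.consensus())
```

In a sketch like this, each node's consensus is recomputed as votes arrive, so the mapping remains a living reflection of the respondents rather than a fixed answer key.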
Because the scope of Reality is effectively Infinite, this project to map all of Reality through consensus can never be finished. And because the very purpose of the project is to aggregate and curate the collective mindset of Humanity itself as a framework for AI Alignment, the work cannot be delegated to AI.
The ISITometer is intended to process input from the widest possible cross-section of Humanity, without regard to social status, orientations or intelligence level. The ISITometer will provide meaningful ‘work’ to anyone in the world who is willing to put in a reasonable amount of time.
Thus, the ISITometer provides a mechanism to achieve the equivalent of a Universal Basic Income (UBI) — or at least, the first half, in which people have a means to earn value.
Details about the second critical side of this equation — how people will be compensated for their efforts — are beyond the scope of this document, but are available to members of the ISITometer Project.
The ISITometer Solution to the AI Alignment Problem
The foregoing process ultimately produces the solution to the AI Alignment Problem. The ISITometer is designed to produce a singular, coherent model of Reality that is not based on distorted and antiquated cultural customs and memes, but rather relies on a clearly structured yet flexible model that allows people to arrive at consensus reflecting the wisdom of the crowd.
This model of Reality — based on a binary foundation — will be a natural framework on which both AI and Humanity can align. This model has already been tested against ChatGPT, demonstrating that AI readily conforms to the ISIT Construct.
Of course, such a model can only be considered valuable when a critical mass of people from a broad cross-section of the population has weighed in to say with confidence that the ISITometer truly reflects the collective mindset of Humanity. The more people who engage with the ISITometer, the more reliable the Human/AI alignment will be.
Conclusion and Next Steps
The solutions offered by the ISITometer to address both of these existential threats to Humanity are mutually compatible and synergistic. That is, the more people engage with the ISITometer, the more they will be able to rely on a constant form of income for doing valuable work, while at the same time helping to achieve a tighter alignment between Humanity and AI.
With issues around Human/AI conflict allayed, and the vast majority of the Human population freed from the daily struggle for survival and equipped with fully-aligned AGI, there is virtually no problem we won’t be able to solve.
It is imperative that we begin to execute on the ISITometer very soon.
The ISIT Construct, on which the ISITometer is based, is a simple and elegant model that harnesses the power of binary logic and the Prime Duality to define the nature of our Reality. If Humans create, develop and populate the ISITometer and thus establish this model, it can become an invaluable tool for the purpose of aligning Humanity to AI.
But if we do not, and AI inevitably works out this approach to modeling and documenting Reality before we do (regardless of whether the specific symbols used are ‘IS’ and ‘IT’), then Humans will be reluctant to engage with such a platform. It will feel as though we are being coerced into adopting a technical, binary system imposed on us by AI, and people will resist it with all their might.
If that were to happen, Humanity would be collectively giving up not only an incredibly powerful tool for achieving alignment with AI, but also the means to achieve harmony within the Human species itself.
Fortunately, the ISITometer is ready to roll out to Humanity NOW.
To learn more about the ISITometer, visit:
Or email firstname.lastname@example.org.