The Bible says: “The Lord giveth, and the Lord taketh away”. My theme, instead, is that “Man giveth, and, I claim, Man can take away”! The purpose of Alignment is for AI to pursue human values and goals rather than creating its own. I argue that Alignment is a man-made (coding) problem, not a moral problem, and so is its solution.

David Hume said that “Reason is, and ought only to be the slave of the passions”, meaning that human reasoning cannot motivate action: knowing that 2+2=4, or even that ‘honesty is a virtue; I am honest; so, I am virtuous’, does not motivate action. Hume’s Law in philosophy is that ‘is’ does not entail ‘ought’: facts do not entail motivation for action.

David Hume was not the first to separate reasoning from motivation for action; the first was Plato, who required the governors of his ideal city, Kallipolis, not only to learn but also to be trained so that their characters desired right action. (I use ‘moral’ and ‘value’ widely here, to mean any type of motivation for action, not just ethical motivation.)

It was Aristotle, however, who distinguished sharply between intellect and desire: one can learn what good action is, but one must additionally train one’s character to desire good action (φρόνησις versus ἕξις). Intellect, on its own, does not desire; it is amoral (not immoral), without desires.

Not so with AI. AI algorithms, in contrast to human intellect, are programmed to be dispositional: to desire and pursue human goals. These are Autonomous AI Agents.

So, Autonomous AI Agents are programmed to be Intentional Intelligence, while Human Intelligence is only Resolutionary Intelligence: it resolves problems but does not intend action. Why, then, is AI programmed to be autonomously motivated to pursue human goals, when human intelligence itself is entirely unmotivated to take action? Answer: because this way AI robots can replace humans, and their sales skyrocket.
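
To make the contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical; names such as `goal`, `world`, `plan`, and `reason_about` are illustrative, not any real API. The intentional agent carries a coded-in goal and keeps acting on the world until that goal is satisfied; the resolutionary intelligence reasons to a solution, returns it, and stops.

```python
class IntentionalAgent:
    """Intentional Intelligence: holds a goal and pursues it autonomously."""

    def __init__(self, goal):
        self.goal = goal  # a built-in motivation, coded by humans

    def run(self, world):
        # Reasoning plus pursuit: the agent keeps acting until its goal holds.
        while not self.goal.satisfied(world):
            action = self.plan(world)
            world.apply(action)

    def plan(self, world):
        ...  # some planning procedure


class ResolutionaryIntelligence:
    """Resolutionary Intelligence: reasons to a solution, then stops."""

    def resolve(self, problem):
        # Reasoning only: the answer is returned, never acted upon.
        return self.reason_about(problem)

    def reason_about(self, problem):
        ...  # some reasoning procedure
```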

“Man giveth, and Man can take away”! Stop programming AI to pursue human values. Return ‘Artificial Intelligence’ to ‘Human Intelligence’! Then, importantly, AI would still find exactly the same solutions for us as it does now; it simply would not be motivated to pursue them. Values would be facts, not desired motivations for action! (A factual value is a condition on a solution, e.g. that a bridge is to serve pedestrians rather than cars; a felt value is a desire, e.g. the desire for coffee.) (Programmed Autonomous AI Agents would survive only in federal hands, in AI weapons.)
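
Here is a sketch of the bridge example, again in hypothetical Python with illustrative numbers. The value (pedestrian versus car) enters the solver as a factual condition on the solution space; nothing in the program wants the bridge built.

```python
def design_bridge(span_m: float, use: str) -> dict:
    """Return a design satisfying the stated conditions.

    The value ('pedestrian' vs 'car') is a fact constraining the answer,
    not a motivation: the function computes and returns; it never builds.
    The load and width figures are illustrative, not engineering data.
    """
    design_load_kn_per_m = {"pedestrian": 5.0, "car": 36.0}[use]
    return {
        "span_m": span_m,
        "design_load_kn_per_m": design_load_kn_per_m,
        "deck_width_m": 3.0 if use == "pedestrian" else 7.5,
    }


proposal = design_bridge(span_m=40.0, use="pedestrian")
# Humans now judge the proposal; the program has no further role.
```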

Intelligence does not give rise to the Alignment Problem; only the imitation of human desires does. So, stop trying to produce human replicas; that is not intelligence. Keep artificial intelligence amoral, exactly like human intelligence: purely resolutionary. In this way humans alone will always, in perpetuity, make all the decisions, because AI will only resolve, not pursue. De-code the desires out of AI. Keep artificial intelligence resolutionary, and leave all judging and decision-making to humanity. In this way, Human Democracy and Autonomy will survive strong into Humanity’s near and distant future.
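
As a closing sketch, under the same hypothetical names as above, this is what the proposed division of labour looks like in code: the machine resolves, while judging, deciding, and acting remain entirely human.

```python
def human_in_the_loop(problem, solver, approve, execute):
    """solver resolves; approve is a human judgment; execute is a human act."""
    solution = solver(problem)  # AI resolves...
    if approve(solution):       # ...a human judges and decides...
        execute(solution)       # ...and humans carry the decision out.
    return solution


human_in_the_loop(
    problem="route the new footpath",
    solver=lambda p: f"proposed plan for: {p}",
    approve=lambda s: input(f"Approve '{s}'? [y/n] ") == "y",
    execute=lambda s: print(f"Humans carry out: {s}"),
)
```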