In the face of calls for moratoria on AI research, it is clear that society wants to govern AI. In the United States, we are governed by the Constitution. It lays out our founding principles and defines the structure of our government: three branches, accountable to the people. It has since been amended 27 times, beginning with the Bill of Rights, which enumerates our inalienable rights. Thus, we govern ourselves, democratically. How shall we govern AI? (Presumably, we don’t want to give AI the vote!)
Recent advances in AI have shaken our social fabric. There is fear of economic upheaval, of sudden, significant unemployment, of deep fakes and toxic bots swarming social media. Fear is easy, and understandable. Thank you, Darwin, God, et al. Flight commonly follows.
The rational, educated, adult reaction to a fear that is not immediately threatening should be to think – without fear – before acting. Is the banning of AI by various social media sites, school districts, universities, and the like rational?
We think “yes” in some cases, such as trying to prevent coding bots from overwhelming Stack Overflow with misinformation. AI is not yet smart enough to provide reliable answers, and don’t hold your breath waiting for that to change. The same is true for social media, where deep fakes and misinformation sown by Russian bots can be overwhelming, but the more practical solution seems to be banning anonymity more broadly. Once you know “who” a user is, you can label or prohibit AI; otherwise, you can’t tell which is which. To put that another way, unless you validate who a social media user is, a ban is nothing but preening.
We think “no” in education, generally speaking. In particular, trying to stop students from using ChatGPT to help with homework or write assigned essays seems like a fool’s errand. It’s like banning calculators. First, it’s impractical: the student will use the calculator (or AI) at home, and teachers will only find out when in-class performance differs. Second, the days of the Luddites are past. Today, calculators do much more than arithmetic, and their use is permitted even in standardized testing, such as the SAT. Beyond calculators, we no longer teach penmanship or typing. The same will happen with AI in education. It will fundamentally change what we teach, how we teach it, and why it matters. It may take as long as it did for calculators to be accepted, for printing to dominate cursive, or for typewriters to disappear, but it’s inevitable.
Various moratoria on AI development have been proposed. For the most part, these too are fearful reactions. They seem more rational, however, because they propose less action: they seek to freeze a moment in time to buy time to think. We think that’s counter-productive, but it’s not unreasonable for society to make such a decision. Still, it can’t work for long. And it will be costly.
We think the rational path is to lay out a constitution governing AI. Just as the Founding Fathers did not put society on hold while they debated and drafted the Constitution in Independence Hall, we prefer not to impose a moratorium on those who are making a living advancing and applying AI toward staggering improvements in worldwide economics, education, healthcare, and more.
Our corner of the AI community has adopted the notion of “constitutional AI” promoted by Anthropic. Constitutional AI requires a provider of AI to define the constitution by which its AI will abide. The dominant principles of such constitutions today are that the AI be helpful, harmless, and honest. However the provider defines the constitution, it represents to users that the AI will substantially conform to that constitution and that the provider is committed to improving such conformance and ameliorating any non-conformance.
That’s all well and good, and we could discuss the details of a typical constitution at length, but it’s not good enough for society. Society needs all such constitutions to be governed by a constitution that we, the people, accept. This shall be the constitution for AI.
The debate by which this constitution shall be defined and ratified belongs, as all matters of law, in the bodies of our legislatures, influenced as always by the public (and, of course, special interests). It should, as with all matters of legislation, be out in the open and deliberate enough to allow for public discussion and protest before enactment, as may be appropriate.
For decades, experts have foretold superhuman AI as a decade away. Today, “the singularity”, in which AI becomes completely general and capable of superhuman performance in all regards, is again, supposedly, a decade or so away. Don’t believe the prognosticators. Here, past performance is indicative of future returns.
Today, our legislators have little understanding of AI. They, like most of us, can only imagine what AI is and what it will become capable of. None of us can distinguish fantasy from reality when projecting AI forward.
What matters is that quick legislation governing AI, passed within the six-month moratorium some are calling for, would be more fearful than thoughtful. It would not allow time for many of us, especially our legislators and eventually regulators, to learn the technical capabilities and limitations of the technology well enough to govern it wisely.
How and whether other nations govern AI is not within our control, however. That alone is perhaps the only rational argument needed to defeat calls for any moratorium. The reasons are somewhat obvious. In any case, given history, international consensus is ever elusive, implying that any moratorium will be either ineffective or perpetual. The former is catastrophic. The latter is impossible.