Computational Law, Symbolic Discourse and the AI Constitution

Leibniz saw computation as a simplifying force in the practice of law. And, yes, some things do become simpler and more clearly defined. But a vast and complex new ocean also opens up.

What does this mean for AI?

How should one tell an AI what to do? Well, one has to have some form of communication that both humans and AIs can understand – and that is rich enough to describe what people want. And as I have described elsewhere, I think this basically means one has to have a knowledge-based computer language – the Wolfram Language, to be exact – and eventually a full symbolic discourse language.

But, OK, so people ask the AI to do something, like “go get some cookies from the store.” What they say will certainly not be complete. The AI must operate with some model of the world and some rules of conduct. Maybe it could figure out how to steal the cookies, but it is not supposed to do that; presumably people want it to follow a certain law, or a certain code of conduct.

And this is where computational law really matters: because it gives us a way to provide that code of conduct in a form that an AI can readily use.

In principle, we could let an AI absorb the whole corpus of laws and case histories, and try to learn from those examples. But as AI becomes more and more important in our society, it will be essential to define all kinds of new laws, and many of these are likely to be “born computational” – not least, I suspect, because they will be too algorithmically complex to be usefully described in traditional natural language.

There’s another problem too: we don’t really just want AIs to comply with the law (whatever it happens to be where they operate); we want them to behave ethically, whatever that means. Even if it’s within the law, we probably don’t want our AIs to lie and cheat; we want them to somehow enhance our society, in line with whatever ethical principles we follow.

Well, one might think, why not just teach AIs ethics the way we could teach them laws? In practice, it’s not that simple. Because while the law has been at least partly codified, the same cannot be said of ethics. Yes, there are philosophical and religious texts about ethics. But they are much more ambiguous than anything that exists in the law.

However, if our symbolic discourse language is complete enough, it should certainly be able to describe ethics as well. And in effect, we should be able to set up a system of computational law that defines a whole code of conduct for AIs.
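To make the idea slightly more concrete, here is a minimal, entirely hypothetical sketch – in Python, since no standard symbolic discourse language of this kind exists yet – of what it might mean to encode rules of conduct as symbolic predicates over proposed actions and check them mechanically. Every name here (`Action`, `Rule`, `permitted`, the toy rules) is an illustrative invention, not part of any real system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Action:
    # A symbolic description of a proposed action (names are hypothetical)
    verb: str
    obj: str
    attributes: frozenset = field(default_factory=frozenset)

@dataclass(frozen=True)
class Rule:
    # A rule of conduct: a name plus a predicate that flags forbidden actions
    name: str
    forbids: callable

def permitted(action, rules):
    """Return (ok, violated_rule_names) for a proposed action."""
    violated = [r.name for r in rules if r.forbids(action)]
    return (len(violated) == 0, violated)

# A toy "code of conduct": no stealing, no deception
code_of_conduct = [
    Rule("no-theft", lambda a: "stolen" in a.attributes),
    Rule("no-deception", lambda a: a.verb == "deceive"),
]

buy = Action("acquire", "cookies", frozenset({"paid"}))
steal = Action("acquire", "cookies", frozenset({"stolen"}))

print(permitted(buy, code_of_conduct))    # (True, [])
print(permitted(steal, code_of_conduct))  # (False, ['no-theft'])
```

Of course, the hard part – which this sketch entirely sidesteps – is giving symbolic meaning to real-world actions and to ambiguous ethical notions in the first place; that is exactly what a full symbolic discourse language would have to provide.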

But what should it say? One might have a few ideas right away. Perhaps one could combine all the ethical systems of the world. Obviously hopeless. Perhaps one could have the AIs just watch what humans do and learn their ethical system from that. Similarly hopeless. Maybe one could try something more local, where the AIs adapt their behavior based on geography, cultural context, and so on (think “droid protocol”). Probably helpful in practice, but hardly a complete solution.

So what can one do? Well, perhaps there are a few principles one could agree on. For example, at least the way we think about things today, most of us don’t want the human race to go extinct (though perhaps in the future having mortal beings will be considered too much fuss, or whatever). And in fact, while most people think there are all sorts of things wrong with our current society and civilization, people generally don’t want it to change too much – and they certainly don’t want change forced upon them.
