Optimizations and Constraints

In the comments of https://mybrainsthoughts.com/?p=327, a discussion sprang up on goals and optimization that seems worth diving into further. That post covers some ideas on criminal justice and optimal sentencing approaches, and the idea that came up in the comments was using AI to optimize our laws, taking human emotion out of the process and allowing for a more rational treatment of policy. This approach makes sense if you have an exactly specifiable goal (and plenty of data) – for example, it would allow you to minimize the economic impact of prison sentences, or to minimize the rate of crimes being committed (though its solution to the latter might simply involve getting rid of laws!). However, optimization does not seem to be the bottleneck preventing our society from making progress in this domain (or others). Rather, we lack agreement on the goals (i.e. ethics, moral principles). One person might have the goal of minimizing economic impact, while another looks to minimize the rate of criminal acts. Complicating things even further, goals are generally not this simple; in a domain like criminal justice, each person has a wide variety of goals, each weighted differently, so that policy becomes a calculus of sorts rather than a simple optimization.
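
To make the point concrete, here is a minimal sketch (with entirely invented cost and crime-rate functions – none of these numbers come from the post or from real data): once a goal is written down exactly, the optimization itself is mechanical, and handing the same optimizer a different goal produces a different “optimal” policy.

```python
# Toy illustration: the optimizer is the easy part; the goal is the contested part.
# All functional forms and constants below are made up for the sake of the example.

def economic_cost(sentence_years: float) -> float:
    # Hypothetical: incarceration cost grows linearly, lost productivity quadratically.
    return 40_000 * sentence_years + 5_000 * sentence_years ** 2

def crime_rate(sentence_years: float) -> float:
    # Hypothetical: deterrence reduces crime, with diminishing returns.
    return 100 / (1 + sentence_years)

def optimize(objective, candidates):
    # The "AI" here is just exhaustive search over candidate policies.
    return min(candidates, key=objective)

candidates = [y / 2 for y in range(0, 41)]  # sentence lengths from 0 to 20 years

print(optimize(economic_cost, candidates))  # -> 0.0  (minimize spending: no sentences at all)
print(optimize(crime_rate, candidates))     # -> 20.0 (minimize crime: maximal sentences)
```

The same machinery gives opposite answers depending on which objective it is handed; nothing in the code adjudicates between the two goals.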

The key point is that there is no universally correct goal. Given a goal (or set of goals), there does exist a best possible approach, but the goals themselves must come from elsewhere. 

For us, these goals arise from our biologies, cultures, religions, and philosophies. The ones rooted in biology are the most universal (e.g. respecting parents and elders, caring for children, etc.), while the others vary more significantly across the globe. Objectively, there’s no way to evaluate which goals are “right” and which are “wrong”, as having goals is a required first step before making those kinds of evaluations. However, we necessarily operate subjectively, and from this perspective frequently look to align the goals of others with our own (as our goals generally demand universal application). 

We’re more accepting of this truth when it comes to other species, which have yet to move beyond their biological goals. When we see a lion eat a zebra, we accept that the lion is simply pursuing its biological goals, not acting immorally. However, we expect more of other people, because we know they have the same capability as us to derive “higher-level” goals from culture, religion, and philosophy.

Returning to the original discussion, it seems the AI of today is much more like the lion than the person. It has a very specific goal (e.g. maximize chess win percentage, minimize image recognition error, etc.) that we assign to it, and its “biology” is structured so as to simply iterate toward that goal. It can be a useful tool for performing an optimization calculus around specific goals, but the goals themselves escape calculation. 

It’s interesting to think about what would have to change for our artificial systems to develop their own goals. A different type of architecture seems to be required, one where the system isn’t so rigidly designed around a particular optimization. For example, AlphaZero was structured in a way that ensured each incremental computation, on average, moved it closer to playing perfect chess. This could be done because the concept of “perfect chess” is mathematically definable, in a way that goal development is not. For the system to be able to develop its own goals (similar to what humans have done through culture, religion, and philosophy), far more generality seems to be required. It’s difficult to imagine these types of systems arising anywhere but in complex, resource-constrained environments involving heredity and mutation…
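
As a loose illustration of what that rigid design looks like (a toy gradient-descent loop, not AlphaZero’s actual self-play training setup), the defining feature is that the objective is supplied from outside and every update can only descend on it – the system has no machinery for revising the goal itself:

```python
# Toy stand-in for a system "rigidly designed around a particular optimization":
# the loss is fixed by the designer, and each incremental step only moves downhill on it.

def loss(theta: float) -> float:
    # The externally assigned goal (here just a made-up quadratic with its minimum at 3).
    return (theta - 3.0) ** 2

def gradient(theta: float) -> float:
    # Derivative of the fixed loss above.
    return 2.0 * (theta - 3.0)

theta = 0.0
for step in range(100):
    theta -= 0.1 * gradient(theta)  # every incremental computation moves toward the fixed goal

print(round(theta, 4))  # -> 3.0; the goal itself was never up for revision
```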

That being said, there seems to be plenty of reason for caution in developing these types of systems. Right now, we have a monopoly on the goals of the world; while our cultural, religious, and philosophical goals may often conflict, the shared grounding of our biology ensures a limit to the divergence. With artificial systems, however, there is no such guarantee. It seems we’ll need to be careful about the systems we create, else we may lose our monopoly (or even lose our say altogether). 

Comments

Jon
2 years ago

I think this is an interesting idea. Obviously something needs to be done regarding our criminal justice system when we have the highest prison population per capita in the world, and there is such a disproportionate difference in sentencing between rich+poor and black+white. Not sure what the answer is, but I think AI could help streamline the process. I think I would trust AI more than a judge, because human emotions can come into play, both positive and negative, which takes away from the objectivity of the exercise.

Oz
2 years ago

I think another issue that people have with AI is the lack of transparency and understanding of the systems. People seem to be more OK with biased or error-prone human decisions than with an AI’s, even when the human decision is the more biased or error-prone of the two (thinking of potential laws around self-driving cars, for example).

Meanderingmoose
2 years ago
Reply to Oz

I fully agree, and think this will be another obstacle for broader roll-outs of AI technology. In most domains, it seems there will need to be an “AI-in-the-middle” approach (with AI enabling human operators) before fully handing the reins to the AI, and “AI-in-the-middle” requires interpretability to be effective (generally). AI performance seems to have improved far faster than AI interpretability, so some catch-up may be needed before the technology takes hold in a deeper way. I found this series by OpenAI to be an interesting read on the current state of interpretability and some of the potential paths forward.

Eugene
2 years ago

It’s interesting how I’ve heard that we think improvement/progress involves “removing human emotion”, but then we also want to “mimic the human brain” (apparently with neural networks?)… which inherently has emotions built in 🙂

In the early Star Trek series, I wonder whether more people agreed with Spock’s opinion or with Captain Kirk’s. Emotions seem to inform logic and rationale, because we humans value emotion, and we use emotions to gauge our actions (how did/will that action make me/them feel?)…