The AI Ethical Dilemma No One Is Talking About

At Rock West Solutions in Southern California, engineers can combine artificial intelligence (AI) with data analytics and signal processing to build a better medical imaging device. They can use the same technologies to improve audio signals, streamline logistics, and even facilitate better electronic payments. Yet the AI component of their work raises an ethical question very few people are willing to talk about.

What is that question? It concerns how AI systems will ultimately be programmed. In a perfect world, a new AI system would be completely agnostic toward political, philosophical, and moral viewpoints. It would simply analyze verifiable facts and data in order to reach conclusions. But the world we live in is far from perfect. And so we now find ourselves asking who will ultimately get to decide what AI looks like in the future.

The Ideological Battle at Google

If you need evidence of the ethical dilemma created by AI, look no further than Google’s Advanced Technology External Advisory Council. Google created the council as a means of self-policing how the company uses AI moving forward. Unfortunately, the council faces the very real risk of being dissolved before it ever accomplishes anything.

Google envisions the council as a filter through which all things AI are run before any decisions are made. The council would have input on Google’s use of everything from facial recognition technology to everyday internet searches. So what’s the problem? Some employees object to some of the members Google has chosen to put on the council.

According to The Republican Standard, one of the first council members targeted is Heritage Foundation president Kay Coles James. Some 500 Google employees have signed a petition demanding that the company remove James over her conservative political viewpoints.

Also targeted is Trumbull Unmanned CEO Dyan Gibbens. Her supposed fatal flaw is that she and her colleagues have a history of working with the U.S. military. A number of Google employees are dead set against AI ever being used for military purposes.

Defining Good and Bad

While the internal battle going on at Google may seem like nothing more than politics to most of us, it runs far deeper than just conservative versus liberal viewpoints. Rock West says that the debate goes to the heart of how AI is programmed.

Artificial intelligence is neither sentient nor creative. An AI system is not aware of its own existence, and it never can be. Nor is it possible to put two AI systems together and have them create a third, entirely separate system with its own innate intelligence and personality.

The fact is that AI systems can only do what they are programmed to do. Even the most intelligent among them has no control over its own existence. The system cannot decide for itself what is good or bad. It can only act on the parameters programmed into it.

This is a problem if future AI development is limited to the principles and belief systems of one group of people to the exclusion of all others. If only one viewpoint is allowed, the resulting system will not be truly intelligent. It will be a limited system capable only of generating results that a single group approves of.
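To make the point concrete, here is a minimal sketch (a hypothetical illustration, not code from Rock West or Google) using a toy text classifier. The same model, trained on the same proposals but labeled by two different groups of people, returns opposite verdicts on the same input:

```python
# Hypothetical illustration: a model's "judgments" are entirely a
# function of the labels humans give it during training.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

proposals = [
    "military drone research",
    "privacy preserving search",
    "facial recognition in public spaces",
    "medical imaging diagnostics",
]

# Two groups of labelers: 1 = acceptable, 0 = unacceptable.
labels_group_a = [0, 1, 0, 1]  # Group A rejects military and surveillance work.
labels_group_b = [1, 1, 0, 1]  # Group B accepts military work, rejects surveillance.

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(proposals)
test = vectorizer.transform(["military drone project"])

for name, labels in (("Group A", labels_group_a), ("Group B", labels_group_b)):
    model = MultinomialNB().fit(X, labels)
    verdict = "acceptable" if model.predict(test)[0] == 1 else "unacceptable"
    print(f"Model trained on {name}'s labels: {verdict}")

# Same architecture, same input, opposite verdicts -- the "ethics"
# live entirely in the training labels, not in the system itself.
```

Whichever group supplies the labels supplies the system’s values; the model itself contributes nothing.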

Artificial intelligence is a good thing in principle, but it raises ethical questions we have yet to answer. Until those questions are answered, we need to tread very lightly.
