11 Artificial Intelligence Ethical Questions Answered
Ethical issues are more value-based and science may not apply in these kinds of conversations.
A major concern in the innovation sphere is the ethical implications of using new technologies and artificial intelligence systems. Technology that is not grounded in basic human values and ethics is risky and outright dangerous! Picture a world where systems controlling nuclear weapons or drones do not follow certain laid-down “codes” of conduct, pun intended. As the world continues its journey of technological discovery and advancement, experts in philosophical circles have relentlessly pressed the ethical implications of adopting any kind of artificial intelligence. AI systems have the capability to influence human interactions both socially and physically.
Questions of ethics arise once a considerable number of people start using or depending on AI. This is largely because these systems have some potentially harmful or powerful traits. Issues of ethics and possible repercussions should be thoroughly discussed even before a new technology is first put to use. In technology, a proactive approach is better than waiting to react after the fact.
Philosophical minds are needed when trying to answer some ethical questions concerning the growing use of certain technologies.
Artificial Intelligence (AI) is one such technology that has raised a lot of questions throughout its existence. Some questions even go as far as talking about Armageddon and how AI will eventually take over the world!
End of Days talk aside, here are a few ethical questions raised concerning the continued use of AI:
1. How do we allocate the wealth generated by AI-driven companies without a human workforce?
As evidenced, many companies now delegate a large part of production to AI systems and technologies. Companies in areas such as Silicon Valley are the front-runners in replacing a human workforce with AI. This results in more efficiency and reduced production costs through savings on salaries and remuneration, and thus more revenue for such technologically advanced companies.
Since most economies are compensation-driven, usually assessed using hourly wages, companies that rely on AI will pay out less in wages, which will in turn affect the spending patterns of the newly unemployed. Putting more thought into a post-work society by structuring a fair post-labour economy is the only way to maintain a stable economy in the future.
2. What happens to the human workforce after AI?
What happens to employed labor once jobs become fully automated by AI?
Take, for instance, the trucking industry in America, which employs millions of individuals. If, or rather when, the self-driving trucks conceptualized by Tesla’s Elon Musk become a reality, where will these truck drivers go? This is just one example; in the coming decade, office workers may also be replaced by more efficient machine technology. We are heading toward a more cognitively intensive industrial era in which most physical work will be performed by machines.
What will this workforce do to earn a living since most people still depend on putting in hours?
All we can do is hope that these new technologies will enable people to return to non-labor activities. Caring for family, engaging in community activities, and exploring new ways to positively contribute to human society will be our new way of life. We might one day look back and think how barbaric and inhuman it actually was to spend our valuable time on earth working just to earn our next paycheck.
3. How do machines affect our interactions and behavior?
As we continue to interact with machines, we may end up losing ourselves. This is already evident, even though most people are unaware of it. For instance, video games are able to trigger the reward centers in the human brain because these virtual experiences are tuned through A/B testing, a rudimentary form of algorithmic optimization designed to capture your attention. This is what makes such video games addictive, and it can be very detrimental to how a society behaves.
However, we can develop ways to use the same kind of software to direct human attention in a positive direction. If used well, this technology could nudge society toward more beneficial behavior.
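The A/B-testing loop described above is simple to sketch. The following is a minimal illustration, with invented engagement rates standing in for real user data: traffic is split between two variants, and whichever captures more attention gets shipped.

```python
import random

def ab_test(engagement_a, engagement_b, trials=1000, seed=0):
    """Pick the variant that captures more attention, measured here
    by simulated engagement rates (fraction of users who click)."""
    rng = random.Random(seed)
    clicks = {"A": 0, "B": 0}
    rates = {"A": engagement_a, "B": engagement_b}
    for _ in range(trials):
        variant = rng.choice(["A", "B"])   # split traffic evenly
        if rng.random() < rates[variant]:  # did the user engage?
            clicks[variant] += 1
    # Ship whichever variant captured more attention.
    return max(clicks, key=clicks.get)

# Hypothetical example: variant B's reward schedule is more addictive.
print(ab_test(engagement_a=0.10, engagement_b=0.30))  # "B"
```

Nothing in the loop asks whether the "winning" variant is good for the user, which is exactly the concern raised here: the optimization target is attention, not well-being.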
4. How do we remove AI bias?
It is true that AI is superior to humans in terms of speed and processing capacity, but it cannot always be trusted to be fair and neutral, as has been proved time and again. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google’s Photos service, where AI is used to identify people, objects, and scenes. Yet it can go wrong: camera software has missed the mark on racial sensitivity, image-labeling algorithms have mislabeled black people as “gorillas”, and other algorithms have charged Asian Americans higher rates for SAT coaching.
However, let us all keep in mind that AI systems are a human creation, and humans have been known to be both biased and judgmental. AI can only become a catalyst for positive change if used well by people striving for social justice and equality.
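One concrete way to check whether a system treats groups differently is to compare its positive-prediction rates across groups, a metric often called the demographic parity gap. A minimal sketch with made-up predictions (the group names and loan-approval setting are illustrative, not from any real audit):

```python
from collections import defaultdict

def demographic_parity_gap(predictions):
    """predictions: list of (group, predicted_label) pairs, label 0/1.
    Returns the largest difference in positive-prediction rate
    between groups; 0.0 means perfectly even rates."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        positives[group] += int(label)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two groups:
preds = ([("group_a", 1)] * 8 + [("group_a", 0)] * 2
         + [("group_b", 1)] * 4 + [("group_b", 0)] * 6)
print(demographic_parity_gap(preds))  # 0.8 vs 0.4, gap of about 0.4
```

Auditing a model this way does not fix biased training data, but it at least makes the disparity measurable rather than anecdotal.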
5. How can we mitigate AI mistakes?
Once a system passes the training process, it moves to the testing phase, where it is presented with more examples to see how it responds.
Even after passing through the training and testing stages, the system may still be “fooled” in ways humans cannot be. Remember that systems act only on their input and are not able to learn from another person’s or entity’s experiences.
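The train/test pipeline and its blind spot can be shown with a deliberately tiny classifier. This is an illustrative sketch, not a real AI system: a nearest-neighbor model passes its test phase, yet still answers confidently on an input far outside anything it was trained on.

```python
def nearest_neighbor(train, x):
    """Classify x by the label of the closest training example."""
    return min(train, key=lambda example: abs(example[0] - x))[1]

# Training phase: the system only ever sees values between 1 and 9.
train = [(1, "small"), (2, "small"), (8, "large"), (9, "large")]

# Testing phase: held-out examples from the same range look fine.
assert nearest_neighbor(train, 1.5) == "small"
assert nearest_neighbor(train, 8.5) == "large"

# But an input far outside its experience still gets a confident
# answer -- the system has no way to say "I have never seen this".
print(nearest_neighbor(train, 1_000_000))  # "large", never "unknown"
```

Passing the test phase only guarantees behavior on inputs resembling the training data; the "fooling" happens precisely where that resemblance breaks down.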
If we plan on relying on AI to bring us into a new technologically impacted world, we have to make sure the machine performs as planned and that individuals cannot override it for their own selfish ends.
6. How do we check on unintended consequences when using AI?
What if artificial intelligence someday turned against us? Terminator aside, let’s imagine an advanced AI system as a “genie” that can fulfill wishes, but with terrible unexpected consequences.
For instance, an AI system may be tasked with eradicating a certain terminal disease. After computing, the AI may in fact propose eradicating the disease by killing everyone suffering from it.
In actual fact, the computer will have provided a working solution, very efficiently, but not in the way we intended.
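The genie problem is really a problem of objective specification: an optimizer maximizes exactly the number it is given, nothing more. A toy sketch with invented actions and outcomes makes the point:

```python
def best_action(actions, objective):
    """Pick the action scoring highest under the stated objective."""
    return max(actions, key=objective)

# Hypothetical world state after each action: 100 people, 10 sick.
actions = {
    "fund_research":   {"sick": 2, "alive": 100},
    "quarantine":      {"sick": 5, "alive": 100},
    "eliminate_hosts": {"sick": 0, "alive": 90},   # kills the sick
}

# Naive objective: minimize the number of sick people.
naive = best_action(actions, lambda a: -actions[a]["sick"])
print(naive)  # "eliminate_hosts" -- efficient, but not what we meant

# Repaired objective: also value keeping people alive.
safe = best_action(actions,
                   lambda a: actions[a]["alive"] - actions[a]["sick"])
print(safe)  # "fund_research"
```

The machine is not malicious in either case; the second objective simply states more of what we actually care about. Real objective specification is far harder than this sketch suggests, which is exactly why the question matters.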
7. How do we keep AI safe from antagonists?
AI software raises unique and concerning questions, even among the elite of the technological sphere. The technological elite have even gone a step further: Elon Musk, for instance, recently announced that together with other tech titans he has committed $1 billion to an artificial intelligence research center. This highlights just how seriously AI security is taken, and concern for what AI may become raises an important question: Will AI software obey the law of the land and adhere to our ethical standards?
As technology becomes more powerful, so should the security protecting it from intrusion. Some AI systems control nuclear weapons and other critical machinery such as airplanes, and could cause unspeakable damage if used maliciously.
Cybersecurity is very important in these times of technological dependence.
8. How do we express the humane handling of AI?
Although neuroscientists are still working to understand human consciousness, we already have a basic grasp of the mechanisms of reward and aversion. For this reason, we are building similar mechanisms of reward and aversion into AI systems: reinforcement learning is now applied to AI systems using a virtual reward.
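A virtual reward signal can be sketched in a few lines. This is a minimal, illustrative value-learning loop (the action names and reward values are invented): the agent gradually repeats whichever action has earned it more reward, a crude analogue of the reward mechanism described above.

```python
import random

def train_agent(rewards, episodes=500, epsilon=0.1, seed=0):
    """Simple value learning on a one-step problem: the agent
    estimates each action's average reward and mostly repeats
    the best one, exploring at random with probability epsilon."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in rewards}
    counts = {a: 0 for a in rewards}
    for _ in range(episodes):
        if rng.random() < epsilon:           # explore occasionally
            action = rng.choice(list(rewards))
        else:                                # otherwise exploit
            action = max(value, key=value.get)
        counts[action] += 1
        # Incrementally average the observed reward for this action.
        value[action] += (rewards[action] - value[action]) / counts[action]
    return max(value, key=value.get)

# Hypothetical rewards: the agent is "paid" more for cooperating.
print(train_agent({"defect": 0.2, "cooperate": 1.0}))  # "cooperate"
```

The agent has no feelings; it only has a number that goes up. Whether a sufficiently elaborate version of that number ever amounts to something morally relevant is precisely the open question of this section.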
As newer, more efficient technologies are created and obsolete ones destroyed, should we at some point consider this a form of mass murder?
Once we consider machines as entities that can feel, act, and perceive, we can then start pondering their legal status.
9. How do we maintain control over a complex intelligent system?
Human dominance is mainly based on our ingenuity and intelligence and not on our physical attributes.
The question many people debate is whether AI will someday replace us as the dominant entity. Currently, we rely on “pulling the plug” whenever AI heads in a direction we don’t like. However, what if someday AI becomes sufficiently advanced to anticipate this and defend itself? This is what we refer to as the “singularity”: the point at which we no longer have control over artificial intelligence.
10. How transparent should AI algorithms be?
Most corporations do not allow their AI algorithms to be scrutinized publicly, and some algorithms are incomprehensible even to the people who made them. Deep learning, though a great way of making predictions, does not really explain how or why it made them.
Some algorithms have even been used to fire teachers, without any clear explanation of how they reached that conclusion.
As Europe’s new General Data Protection Regulation advises, we should be willing to sacrifice some accuracy for transparency. Let us demand that machines be better at transparency, even if we ourselves are not.
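One way to honor that trade-off is to prefer models that can state their own reasoning. The following sketch learns a single-threshold rule from invented teacher-evaluation records (the data and the retention setting are hypothetical, chosen to echo the teacher-firing example above). Such a rule may be less accurate than a deep network, but it can be read aloud to the person it affects.

```python
def fit_rule(examples):
    """Learn a one-threshold rule: predict positive iff score >= t.
    Returns (threshold, accuracy) for the best threshold found --
    a model simple enough to explain in one sentence."""
    best = (None, 0.0)
    for t in sorted({score for score, _ in examples}):
        correct = sum((score >= t) == label for score, label in examples)
        accuracy = correct / len(examples)
        if accuracy > best[1]:
            best = (t, accuracy)
    return best

# Hypothetical (evaluation_score, retained?) records:
data = [(35, False), (48, False), (62, True), (70, True), (90, True)]
threshold, accuracy = fit_rule(data)
print(f"retain iff score >= {threshold} (accuracy {accuracy:.0%})")
```

A decision made this way can be contested: an affected teacher can ask why the threshold sits where it does, which is impossible when the model is an opaque network.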
11. How do we control the spread of fake news and fake videos when using AI systems?
New digital spaces have led to innovative journalistic practices that allow novel methods of communication and better global reach than at any point in history. But then again, disinformation and hoaxes that are commonly referred to as “fake news” are increasing and affecting the way people interpret daily developments.
Since screen time is typically the measure of success in advertising models, biased or inflammatory stories spread on most social media platforms, because humans are more likely to engage with such content. ML technology is also a contributing factor, as people can now create fake videos that look very realistic to the naked eye. Studies have shown that fake news was 70% more likely to be retweeted than real news.
We should put in place serious penalties for the spread of fake news and also use AI software that recognizes and filters it out. Everyone has a duty to fight the plague of fake news, whether by supporting investigative journalism, decreasing the monetary incentives for fake news, or cultivating digital literacy among the public.
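The filtering step can be sketched crudely. Real systems train classifiers on large labeled corpora; the keyword list and headlines below are invented, and this only illustrates the idea of automatically flagging sensational content.

```python
def flag_suspicious(headline, trigger_words):
    """Crude filter: flag a headline if it contains two or more
    sensational trigger words. Purely illustrative -- production
    systems use trained classifiers, not hand-written word lists."""
    words = headline.lower().split()
    hits = sum(word.strip("!?.,") in trigger_words for word in words)
    return hits >= 2

TRIGGERS = {"shocking", "miracle", "exposed", "secret", "banned"}

print(flag_suspicious("Shocking secret cure EXPOSED by doctors!", TRIGGERS))  # True
print(flag_suspicious("Council approves new bus routes", TRIGGERS))           # False
```

A word-list filter is trivially evaded, which is why the paragraph above pairs automated filtering with penalties, journalism, and digital literacy rather than relying on software alone.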
Artificial intelligence is as much a new frontier for philosophy as it is for technology. Tech giants such as Facebook, IBM, and Microsoft, along with individuals like Elon Musk and the late Stephen Hawking, believe that now is the right time to talk about ethics in the nearly infinite landscape of artificial intelligence.
The ethical questions discussed here mainly surround mitigating suffering and the risk of negative outcomes.
However, as we keep AI in check, we should keep in mind the positives that have been, and will continue to be, borne out of technological progress. Let us strive for the responsible implementation of artificial intelligence as we enjoy its vast potential.