Will we get human-intelligent AI in this century?
A common notion is that we'll get human-level AI this century, but is that really the case?
Many of the problems associated with human-level AI are so difficult that they may take decades to resolve. Rather than sentient AI beings, we are more likely to see existing products augmented with AI capabilities, much like how Siri was added as a feature to Apple products.
That said, the long-term goal of many researchers is to create a general AI (AGI, or "strong AI"). While narrow AI may outperform humans at whatever its specific task is, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
A superintelligent AI is by definition excellent at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. For many working in AI and cognitive computing, the ultimate achievement is to build a machine that simulates human thought processes: one that can interpret speech and language and then respond coherently.
When it comes to predicting the future of AI, past technological developments are not much of a guide, because we have never encountered anything with the ability to, knowingly or unwittingly, outsmart us.
In the 1960s, the US Department of Defense took an interest in this kind of work and began funding efforts to build computers that mimic basic human reasoning. Flash forward to 2019 and we have yet to develop the type of AI those projects aimed for. Still, even if highly intelligent AI doesn't arrive this century, it is wise to prepare for the eventuality.
Superintelligence in Our Lifetime
Until recently, the idea that the quest for strong AI would ultimately succeed was more imagination than reality. This is still partly true: as discussed above, AI is more likely to be sold as an addition to existing software than as a standalone product.
However, thanks to recent breakthroughs, many AI milestones have now been reached, leading many experts to take seriously the possibility of superintelligence in our lifetime.
There have been a number of surveys asking AI researchers how many years from now they think we'll have human-level AI with at least 50% probability. While some experts still guess that human-level AI is centuries away, a majority of AI researchers guess that it will arrive before 2060.
The common limitation of AI is that it learns from data, specifically human-generated data. So, just as an AI algorithm can teach itself how to win at chess, it can teach itself what product to recommend next online. But this is not human intelligence; this is mathematics.
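To see how mechanical such "intelligence" can be, consider a minimal sketch of product recommendation: the product names and purchase histories below are invented for illustration, and real recommender systems are far more sophisticated, but the core idea is the same counting of patterns in human-generated data.

```python
from collections import Counter

# Hypothetical purchase histories (illustrative data, not from any real store).
histories = [
    ["chess set", "strategy book", "timer"],
    ["chess set", "timer"],
    ["strategy book", "notebook"],
]

def recommend(item, histories):
    """Return the item most often bought alongside `item`.

    This is pure counting of co-occurrences -- statistics,
    with no understanding of what the products are.
    """
    counts = Counter()
    for basket in histories:
        if item in basket:
            for other in basket:
                if other != item:
                    counts[other] += 1
    return counts.most_common(1)[0][0] if counts else None

print(recommend("chess set", histories))  # prints "timer"
```

The algorithm "learns" only in the sense that its output reflects patterns humans left in the data; it has no notion of chess, timers, or why anyone buys them.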
Besides grappling with the very complicated question of "what makes us human," AI developers will also need to improve the security of AI systems before we even come close to human-level AI.
While it may be little more than a minor nuisance if your laptop crashes or gets hacked, it becomes all the more important that an AI system does what you want it to do when the task is to control your car, your airplane, your pacemaker, your automated traffic system or your power grid.
Getting AI to be secure, safe and widely adopted is the first step. Human-level cognition comes later.