Bill Gates is now warning us about Artificial Intelligence (AI). In recent news he has said that we should be concerned about the risks of creating AI, a view that contradicts that of a Microsoft Research chief.
This follows an interview in which Stephen Hawking gave similar warnings, painting an almost Hollywood-blockbuster picture. He backs the idea that AI would likely turn against humanity once it becomes self-sustaining.
AI has come a long way and is approaching the level of 90s film fantasies. At the current rate of technological advance, humanoid AI could be appearing in public places within the next 10-15 years.
The key thing we need to think about is what we class as risky AI. In essence, self-driving cars are a form of AI, and the predictive text on most phones is a form of AI. However, we don't tend to see these as the type of AI that will kill all of humanity. We associate risky AI with the humanoid form, which is still many years away. One day we will probably have AI on the same level as in 'I, Robot', but it is at least 15-20 years before we start seeing it in everyday life.
AI is a great thing for the world, as it can help us be more productive and take us into a whole new age. But the risk is that once we make AI smart enough, it could easily disobey us. The day could come when we rely on it, but it doesn't rely on us. It could then easily rebel against us.
The idea of AI rebelling against humanity stems largely from the expectation that AI will be used to replace people in jobs we don't want to do, such as hard labour. In basic terms, AI will be humanity's slaves.
So, if they are slaves, can we really blame them if they were to rebel and turn against us?