In a recent article, titled “Ethics education the key to AI future for Hitachi”, the author outlined the opinion of the Japanese manufacturer Hitachi that “computers will never surpass human intelligence, but dangers exist”.
Why is it that acclaimed futurists such as Ray Kurzweil believe AI will exceed human capability by around 2045, yet others hold this to be "impossible"?
We are currently experiencing the third, and most critical, AI boom since the 1960s. The difference today is that we now have what previous booms lacked – computing power, and the insights gained from advances in neuroscience.
Hitachi recently launched its AI system "H" for analysing big data. This lends credence to the view that its position on AI is not simply self-serving. On the contrary, it firmly believes that the singularity, the point at which artificial cognitive technologies match and then surpass human intelligence, "cannot happen." It does concede that AI can certainly surpass humans in terms of knowledge processing, but holds that it will continue to lack #theHumanFactor – a will of its own.
The advancement of AI depends largely on human development of AI algorithms. Whilst machine learning will account for an increasing proportion of algorithmic development, it still relies heavily on humans. Rather than separating humans and AI robots, the company believes that AI has the potential to establish a more "ambient society," one in which robots are integrated, and accepted, into every aspect of our lives.
However, one glaring issue that is receiving only superficial attention is ethics. Since humans are largely responsible for programming robots and AI, who decides on the decision algorithms that have a moral dimension? With wide variance in cultural interpretations of what is acceptable, ethics is one area where AI needs far more scrutiny. Akin to ethics, the other area that needs very careful analysis and global agreement is the definition of what will continue to be human, and what is robotic. What is 'real'?
Already we are witnessing generation gaps created by technology rather than chronology, so how will we mix cyborg and humanoid forms together? I doubt it will be smooth, natural or seamless. There is likely to be open debate – my concern is that these issues should be debated now, so that frameworks of principles can be incorporated into development before we reach the point of ethical collision. Whilst I can understand a real concern not to impede current development, we must remember that, in spite of the 'dream' of living in an ideal world where one nation intends no harm to another, we have enough evidence to recognise that this is not a reality.
Terrorism will become less about physical acts of violence, and more about technological disruption of economies and cultures. Are we equipped to deal with all the scenarios that will arise?
Hitachi believes that education is the key, placing high importance on educating its engineers about the ethical questions they will face as AI continues to advance. It is tapping into the wisdom of retired engineers to ensure that principled design concepts are an integral part of development, and are not cast aside in the thrill of pursuing the next big leap in AI design. At present, AI is focused more on repetitive, transactional processing requiring little in the way of moral judgement. However, as it becomes more integrated into our lives, it will be required to take on more and more human decision-making. This is where the consequences of ill-conceived AI moral coding could have a detrimental, even dangerous, impact.
There is no doubt that AI promises many benefits for humans – the only question is whether humans are ready for AI.