Over the centuries there has been a struggle for basic human rights for everyone, now enshrined in the Universal Declaration of Human Rights. These rights are recognised by all except the most extreme libertarians and criminals. More recent, and still very much under debate, is the fight for animal rights, which follows the discovery that even quite simple lifeforms can feel pain and that many higher animals have recognisable emotions. Much more problematic, and with discussion only just beginning, is the question of whether machines can ever have feelings and hence whether their rights should be protected.
AIs (Artificial Intelligences) that learn are commonplace; it is no longer true to say that computers can only do what humans have programmed into them. These AIs outperform humans in many ways that would have been thought impossible not so very long ago. There is talk of computers more powerful than human brains, of uploading the complete contents of a human brain into a computer, of head transplants, and of interfacing a human to a computer using thought alone. The distinction between human and machine as measured by intelligence is already becoming difficult to draw, and we must look to tests far more sophisticated than the basic Turing Test if we want to distinguish between them accurately. John Searle, with his Chinese Room thought experiment and in other writings, has firmly made the case that computers can never be conscious but can only simulate consciousness; others such as Daniel Dennett disagree. All accept that, at the present time, we do not understand what consciousness is, although the physical human brain itself is beginning to give up its secrets. It is difficult to accept that computers, whose power and abilities are increasing exponentially, and which already learn from and react to their environment in ways that are impossible to determine completely, can never exhibit an attribute which we ourselves do not understand. It seems that in the not too distant future we will interact with machines which exhibit all the characteristics of being conscious but whose actual conscious nature we cannot determine.
There is an additional dilemma. We can already use stem cells to repair or replace damaged brain cells; it is not too great a leap to growing something that is physically like a complete brain. Many, like Ray Kurzweil, see a future where humans and machines grow together. Not only will artificial organs and limbs be used to repair, replace and improve the current delicate biological ones, but implants connecting directly into the brain will place the entire internet and vast amounts of processing power directly into our minds. It is not beyond our imagination to envisage a human most of whose biological parts have been replaced by machinery, and whose only remaining tissue is a brain that itself forms only part of the "mind" of the being. What would be the difference between that and a machine constructed with part of its brain composed of human-like brain cells grown in a factory? Should we regard the one as fully human, with all the rights that entails, and the other as a mere machine with no rights, which can be shut down on a whim?
It may seem that it is very early in the development of AI to be thinking in these terms, but the exponential nature of technological development means that the time will be upon us well before many of us realise it. We are already deploying robots in situations which would be too dangerous for humans or, in the case of space exploration for example, where there is no hope of return, because they are expendable. We are imagining, even looking forward to, the use of robots as soldiers, domestic servants and sex toys, all the while doing everything we can to make them autonomous, which means that they will be able to make their own decisions and perform their functions without direct human control. In this respect these robots are being treated in very much the same way as slaves were in the past (and indeed still are in the present). It took many centuries and many lives in the struggle to recognise that all humans have basic rights that should be upheld by law; it is better to recognise and deal with the potential problems now rather than face similar problems again.
There are many who fear the rise of machine intelligence, seeing it as the beginning of the end for a human race which will gradually become superfluous. This raises the important question of whether it might be best to try to put a stop to it now, to outlaw all research into machine intelligence and consciousness so as to cement forever the supremacy of the living human mind. This in itself raises many ethical and practical problems. If we as the human race have the potential to create a new sentient species, should we not do so? Is deliberately failing to do so a form of specicide? Surely if the result of creating a new species turns out to be that it becomes dominant and we decline and disappear, should we not accept that this is as it should be, since it would only happen if they were better suited to living in and understanding the universe than we are? It is, of course, highly unlikely that a race as expansionary and warlike as ours would meekly accept such an end, so a struggle for supremacy would be very likely unless we take the path of harmonious coexistence and gradual merging of human and machine. Even this would involve many problems for a species as large and diverse as the human race. Universal acceptance of the best course to follow for the race as a whole is not something that has ever been achieved so far.
In practice any attempt to ban research into advanced machine intelligence is certain to fail. The genie is quite definitely out of the bottle already. AI is so useful and cost-effective in so many different circumstances that the basic laws of supply and demand on which our capitalist system is based would militate against any attempt to restrict its development. Implementation of any such laws would also be virtually impossible, since it would have to be complete and universal. Previous attempts to ban or restrict technologies, such as research into human embryos and cloning, have been mainly unsuccessful, although there is certainly value in expressing the opposition of the majority to such research even if it continues underground. In the same way, if we accept the possibility that AIs may be given or may develop consciousness, then we should begin to formulate a regulatory regime now, so that the first artificial minds deemed to be conscious will have been decently treated.
Yet another problem arises if we do accept the possibility of conscious machines. Given that, initially at least, they will have been constructed by humans, and their education will be provided by humans, will they have an understanding of ethics, and if so, will their conception of it remain the same as ours as they develop separately? Given that debates on ethics and morality have been going on since the beginning of human existence, with Socrates still considered a major source for modern thought, it is perhaps unlikely that our own understanding of ethics is sufficient for it to be agreed and codified for incorporation in a machine. It might be best for the AIs, and perhaps also for us, if they were left to develop their own theories of how societies should be organised and how sentient beings should behave towards each other: a practical experiment in John Rawls's Veil of Ignorance. This would be an act of faith that such a society would develop ethically and include humans as equals.
Let us return to the situation as it is now, when we consider whether machines, computers, robots or AIs can ever be considered intelligent, conscious, sentient beings, and therefore should enjoy those same rights that we currently give, or should give, to humans. We are considering entities whose capacities are, in general, below ours, but increasing exponentially with no foreseeable limit. The qualities in question have been much debated but are still poorly understood; we do not understand what they are nor how they arose. We may not even be able to tell whether the machines have actually developed those qualities or whether they are just simulating them; does this make a difference? If we concede the possibility that machines might become conscious, or just that for our own sakes we should treat them as such even if their consciousness might technically be a simulation, then there are some deep philosophical, sociological and political questions that must be urgently considered. These range from deepening and extending our own concepts of ethics so that they are transferable to machines, to allowing machines space to develop their own systems, which will, we hope, merge with our own. It would be a pity if our worst fears were realised and the first conscious intellects of a new species were inimical to humanity because their forebears had been treated as mere machines.