Developing an ethical future for AI
“The best way to ensure an ethical future for AI is to invent it together” – Dario Amodei, research scientist, OpenAI
Despite the widespread use of artificial intelligence (AI) across a range of industries and services, no country has yet established an ethical or legal framework to govern its use and development, although, as our previous blog noted, a number are working on one.
AI is still relatively embryonic, but the pace of change is headlong and the investment considerable. Global spending on AI and cognitive systems is expected to grow to $19.1 billion in 2018, an increase of 54.3% over the amount spent in 2017[1]. Global research firm Gartner predicts that AI will create 2.3 million new jobs worldwide by 2020, while at the same time eliminating 1.8 million existing roles. Business and society need to manage this transition.
Potential to transform
AI undoubtedly has the potential to transform economies and benefit humanity, but to do this successfully it will need to be both predictable and trusted. As AI evolves and becomes more sophisticated, it will make or assist in making decisions that have a far-reaching impact on individual lives. The more automated those decisions become, the greater the ethical challenges posed. Businesses and those at the vanguard of AI deployment cannot afford to wait for regulators to set the bar and define the ethical or regulatory framework.
In June, Google published seven principles for the development of AI within the company. Interestingly, this was not a senior management initiative but came from the company’s programmers. Their action, possibly prompted by a memo from Google Cloud’s chief scientist, is believed to have been a response to Google’s first major AI contract with the Pentagon, a military project. According to reports, the memo advised against any mention or implication that AI was being used, for fear that media exposure could damage the company.
Developing the right principles
Google’s programmers are not alone. A number of institutions and industry bodies have published overarching principles that should govern the development of AI technologies and autonomous systems, including the Institute of Electrical and Electronics Engineers, the Institute of Business Ethics and, approved only last month, the Association for Computing Machinery.
A common factor in all these published principles is that the development of AI should be socially beneficial, should not cause harm or contravene human rights, and should be accountable to people.
Developing a set of principles around the use of AI or cognitive systems is a good place for a company to start. Businesses should also ensure that their use of AI aligns with company values and is in accordance with their code of conduct.
AI will only fulfil its potential to be a force for good if it is built around an ethical code. Ethical thinking therefore needs to be applied from the outset, informing the development, programming and application of AI. Businesses also need to consider privacy issues and data protection to ensure that their systems accord with the highest standards of privacy ‘by design’. A system of monitoring and oversight will be needed to ensure that the technology is working as intended, that any risks have been identified and that appropriate checks and mitigation are in place.
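To make this less abstract, the sketch below shows one way such oversight might look in practice. It is purely illustrative, and every name in it (the loan_decision_model, the confidence threshold, the audit logger) is a hypothetical stand-in for whatever systems and review processes a business actually operates: each automated decision is recorded with its inputs, outcome and model version, and low-confidence cases are flagged for human review rather than acted on automatically.

```python
# Illustrative sketch only: a minimal "human-in-the-loop" wrapper around an
# automated decision. All names here are hypothetical examples, not a
# prescribed implementation.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

@dataclass
class Decision:
    model_version: str        # which model produced the outcome
    inputs: dict              # the data the decision was based on
    outcome: str              # the automated recommendation
    confidence: float         # the model's own confidence score
    needs_human_review: bool  # escalation flag for the oversight process
    timestamp: str

def loan_decision_model(applicant: dict) -> tuple[str, float]:
    """Hypothetical stand-in for a real model: approves if income covers the loan."""
    confidence = min(applicant["income"] / applicant["loan_amount"], 1.0)
    outcome = "approve" if confidence >= 0.5 else "decline"
    return outcome, confidence

def decide(applicant: dict, review_threshold: float = 0.7) -> Decision:
    """Make an automated decision, record it, and escalate low-confidence cases."""
    outcome, confidence = loan_decision_model(applicant)
    decision = Decision(
        model_version="credit-model-v1",
        inputs=applicant,
        outcome=outcome,
        confidence=round(confidence, 2),
        needs_human_review=confidence < review_threshold,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Every automated decision is logged so it can be explained and audited later.
    audit_log.info(json.dumps(asdict(decision)))
    return decision

if __name__ == "__main__":
    result = decide({"income": 30000, "loan_amount": 50000})
    print(result.outcome, "| escalate to human reviewer:", result.needs_human_review)
```

The point of the sketch is the structure, not the model: decisions are recorded in a form that can later be explained, and accountability for borderline cases sits with a person, not the algorithm.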
Key ethics considerations
Responsibility: Businesses must take responsibility for the actions and impact of the technologies they use. Robots, algorithms and other cognitive systems cannot be held accountable for any failures or harm caused, so businesses must make it clear where accountability lies within their organisations and how this is managed. In other words, businesses using AI are responsible for ensuring that the technology is designed to operate in a way that does no harm and that it continues to operate in the way it was designed.
Shared benefit and prosperity: There should be a demonstrable benefit to the use of AI, including evidence of new opportunities created. Businesses and governments will need to work together to manage the impact of AI on the workforce, providing assistance, such as retraining, where possible.
Fairness: AI systems must be free from bias, prejudice or discrimination. They need to be seen to produce consistent, accurate and reliable results; a simple check of this kind is sketched after this list.
Transparency: Where there are system failures, there should be a commitment to report and explain them. Organisations should understand their AI systems, be able to explain how they are used, and be accountable for the actions or decisions those systems take.
Trust: For AI to succeed it must be trusted and understood. Systems must be operationally safe and secure, and they should not be used to cause harm or compromise other recognised human rights and freedoms.
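To illustrate the fairness point above, the sketch below shows one minimal, purely hypothetical check, comparing approval rates across groups (sometimes called demographic parity) and flagging a large gap for investigation. The data, group labels and threshold are invented for the example; a real audit would use an organisation’s own outcomes data and its own fairness criteria.

```python
# Illustrative sketch only: a simple demographic-parity check on automated decisions.
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, was_approved in decisions:
        totals[group] += 1
        approved[group] += int(was_approved)
    return {group: approved[group] / totals[group] for group in totals}

def parity_gap(decisions):
    """Difference between the highest and lowest group approval rates."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical outcomes from an automated decision system.
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    gap, rates = parity_gap(sample)
    print("Approval rates:", rates)
    print("Parity gap:", round(gap, 2))  # a large gap flags possible bias to investigate
```

A check like this does not prove a system is fair, but routinely running and reviewing such measures is one concrete way of turning the principle into an operational practice.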
Conclusion
AI may be cutting edge, but it cannot be intrinsically moral or ethical. That is entirely dependent on how it is developed, programmed, monitored and applied. For business and society to obtain the maximum benefit from these scientific advances, businesses must recognise that they are responsible and accountable for the ethical development and application of these technologies.
[1] International Data Corporation, March 2018 – https://www.idc.com/getdoc.jsp?containerId=prUS43662418 (third-party link no longer available)