AI and Healthcare in Africa
Artificial intelligence in healthcare: Proposals for policy development in South Africa
Despite the tremendous promise offered by artificial intelligence (AI) for healthcare in South Africa, existing policy frameworks are inadequate for encouraging innovation in this field. Practical, concrete and solution-driven policy recommendations are needed to encourage the creation and use of AI systems. This article considers five distinct problematic issues which call for policy development: (i) outdated legislation; (ii) data and algorithmic bias; (iii) the impact on the healthcare workforce; (iv) the liability dilemma; and (v) a lack of innovation and development of AI systems for healthcare in South Africa. The adoption of a national policy framework that addresses these issues directly is imperative to ensure the uptake of AI development and deployment for healthcare in a safe, responsible and regulated manner.
Authors’ affiliations
S Naidoo, D Bottomley, M Naidoo, D Donnelly, D W Thaldar
First Do No Harm: Legal Principles Regulating the Future of Artificial Intelligence in Health Care in South Africa
What sets AI systems and AI-powered medical robots apart from all other forms of advanced medical technology is their ability to operate at least to some degree autonomously from the human health care practitioner and to use machine learning to generate new, often unforeseen, analyses and predictions. This poses challenges under the current framework of laws, regulations, and ethical guidelines applicable to health care in South Africa. The article outlines these challenges and sets out guiding principles for a normative framework to regulate the use of AI in health care. It examines three key areas for legal reform. First, it proposes that the regulatory framework for the oversight of software as a medical device needs to be updated to adequately regulate the use of such new technologies. Secondly, it argues that the present HPCSA guidelines for health care practitioners in South Africa adopt an unduly restrictive approach centred on the outmoded semantics of telemedicine. This may discourage technological innovation that could improve access to health care for all, and as such the guidelines are inconsistent with the national digital health strategy. Thirdly, it examines the common law principles of fault-based liability for medical negligence, which could prove inadequate to provide patients and users of new technologies with redress for harm where fault cannot clearly be attributed to the health care practitioner. It argues that consideration should be given to developing a statutory scheme for strict liability, together with mandatory insurance, and appropriate reform of product liability pertaining to technology developers and manufacturers. These legal reforms should not be undertaken without also developing a coherent, human-rights-centred policy framework for the ethical use of AI, robotics, and related technologies in health care in South Africa.
Author
Dusty-Lee Donnelly
Team Profile
Prof. Donrich Thaldar
Shiniel Naidoo
Beverley Townsend
Meshandren Naidoo
Dusty-Lee Donnelly
Organisation: School of Law, University of KwaZulu-Natal
Project Title: Artificial Intelligence in Healthcare in South Africa
Country: South Africa
Principal Investigator: Prof. Donrich Thaldar
Project Description:
The objective of this research was two-fold: first, to assess and clearly delineate the AI normative framework within South Africa and consider how this aligns with human rights, ethics and international best practice; and secondly, to suggest how any insufficient alignment might be remedied.