3 August 2023

AI In The Charitable And Philanthropic Sectors: A Risk Or Opportunity? Neasa Coen writes for IFC Review

Neasa Coen, Counsel in the Charities and Philanthropy team, has authored an article for IFC Review entitled ‘AI In The Charitable And Philanthropic Sectors: A Risk Or Opportunity?’

The article was first published in IFC Review on 3 August 2023.

With expanding automation of infrastructure, industry, and the workplace, AI systems are progressively assuming a greater social role. A recent paper by the Alan Turing Institute[1] describes the situation well.

“Like steam power or electricity, artificial intelligence is not simply a general purpose technology. It is, more essentially, becoming a gatekeeper technology that holds the key both to the potential for the exponential advancement of human wellbeing and to possibilities for the emergence of significant risks for society’s future. It is, as yet, humankind that must ultimately choose which direction the key will turn.”

AI technology is developing faster than the laws which regulate its use, raising questions about ethical issues, risks to society and the ways in which AI should be regulated. This article provides a high-level overview of AI-related issues relevant to the charitable and philanthropic sectors. It will be important for these sectors to understand and adapt to the opportunities and challenges posed by AI, particularly so as to avoid being 'left behind' as the technology develops.

What is AI?

AI envisages systems that learn for themselves. Marvin Minsky, one of the founders of the field, defined it as follows: “Artificial intelligence is the science of making computers do things that require intelligence when done by humans”. Because AI systems make decisions which would otherwise call for human intelligence, questions arise as to who is accountable for those decisions.

Areas under development relate to speech recognition, language translation, visual perception (facial recognition) and complex automation (eg driverless cars). In the legal arena, for example, contract analysis tools have been used to automate due diligence processes in merger/takeover transactions.

AI operates using algorithms and is heavily reliant on data, in particular, accurate data. An algorithm (which takes the form of computer code) comprises a series of instructions which allows a machine to process input data and generate output data in the form of a decision. AI systems can be distinguished by how they learn from data and by the generality of the tasks they can perform (a short illustrative sketch follows this list):

  • Supervised learning is a process whereby the AI system is trained to produce a certain outcome. This is generally achieved by the system processing available data which has been labelled in a particular way.
  • Unsupervised learning refers to a process whereby the machine determines patterns from the data and produces its own outcome. It is this latter category which is of greatest significance, not least because it underpins the ‘black box’ concerns discussed below.
  • Artificial general intelligence is arguably the ultimate aim of those developing future AI systems – if implemented it would allow AI to find a solution to any task that a human being can perform (even unfamiliar tasks) using sensory perception, natural language understanding and creativity.
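
By way of illustration only, the short Python sketch below contrasts the first two approaches: a supervised model is fitted to labelled examples, while an unsupervised model is left to find patterns in the same data without any labels. It is a minimal example using the scikit-learn library; the synthetic dataset, the choice of models and the parameters are assumptions made for demonstration, not a description of any particular system.

```python
# Illustrative sketch: supervised vs unsupervised learning with scikit-learn.
# The synthetic dataset and model choices are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# A small synthetic dataset: X holds the input features, y the human-provided labels.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: the system is trained on data which has been labelled
# in a particular way, so it learns to reproduce the labelled outcome.
classifier = LogisticRegression().fit(X, y)
print("Supervised predictions:", classifier.predict(X[:5]))

# Unsupervised learning: no labels are supplied; the system determines patterns
# (here, two clusters) from the data and produces its own grouping.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Unsupervised cluster assignments:", clusterer.labels_[:5])
```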

Depending on the sophistication of the system, it can be difficult to establish how an AI system has made a decision and what factors it has taken into account in order to do so. This is particularly the case where the system uses unsupervised learning. As a result, AI has been described as operating in a ‘black box’.

How Might AI Affect Philanthropy and Charitable Giving?

There are aspects of AI which could be used to streamline grant-making processes both for charities making grants (for example, large foundations), and those applying for grants.

AI language applications allow the creation of text with limited input by a human being. Commentators are of the view that this will allow both the creation and assessment of grant-making applications, with limited human intervention. Charity commentators[2] also highlight the possibility of other language systems (for example, transcription and translation services) making it possible for minority or disadvantaged communities to access grants and funding. Increased automation will likely result in the streamlining of philanthropic services, allowing the grant lifecycle to operate with minimal human intervention.

AI is likely to enable more accurate assessment of the use of charity funds by both those who give and those who receive them. It could be of use in monitoring environmental issues (for example, wildlife in a certain area, water levels or flooding data), thereby providing data-driven evidence of the impact of expenditure and charitable interventions. Automation could also result in financial savings by avoiding duplication of spending.

Online giving may become increasingly influenced by AI, particularly in relation to profiling. Fundraising websites which profile individuals will be able to nudge donors in relation to their giving preferences. While this could enhance the ability to match donors to causes in which they are interested, it could also result in increased funding flowing to well-known causes or to charities with larger fundraising budgets.

GDPR

Automated decisions are decisions made by AI systems without human intervention. Historically, they have been used in a financial services context, for example, in relation to credit scoring and the approval of consumer loans. Their use is becoming more widespread, and they now feature in recruitment and hiring exercises, content filtering and fraud detection. If charities use AI to make automated decisions, they must do so in compliance with the GDPR.

Currently, under the GDPR, it is possible for automated decision-making to take place in the following circumstances: 1) where the activity is required or authorised by law; 2) where a data subject has provided explicit consent to the processing; and 3) where it is necessary for the performance of or in order to enter into a contract between a data controller and a data subject. Under the GDPR it is possible for data subjects to request a review of decision-making based solely on automated processing. This requires the provision of meaningful information about the logic involved in the decision.

The Data Protection and Digital Information (No. 2) Bill 2023 is currently before the House of Commons. It proposes some changes to automated decision-making against the backdrop of an increase in the use of AI. The Bill seeks to amend the definition of automated decisions to cover decisions which have no meaningful human involvement. It would also confer power on the Secretary of State to amend the legislation directly or through secondary legislation, with a view to enabling regulation to keep pace with a fast-moving environment. The Bill remains subject to legislative scrutiny and its provisions may change; nevertheless, charities using automated decision-making technologies will need to monitor its progress and consider the impact of any changes.

Collaborative Projects

Charities may find themselves involved in projects (with other charities or commercial organisations) where AI forms a part of the project. For example, a charity might share data about its beneficiaries as part of a research project, or enter into an arrangement with an external recruitment provider which will use AI to analyse job applications and shortlist candidates for interview. When entering into these projects, charity trustees will need to ensure that, in doing so, they act in the best interests of the charity and are furthering the charity’s purposes. This will involve considering the need for, and expected benefits of, the proposed project as well as the associated risks. The DeepMind case highlights the need for caution in this area. In 2017 the Information Commissioner’s Office found that the Royal Free Hospital failed to comply with data protection law when it transferred sensitive patient data to DeepMind (a subsidiary of Google) as part of a partnership to create a healthcare app relating to kidney injury.

The roles and responsibilities of all parties in a collaborative project need to be carefully documented. In the case of AI projects, particular attention will need to be given to data protection issues. In broad terms, this will include considering the relevant data and how it is being used, as well as the consents required from data subjects. This in turn will allow the rights and obligations of the parties to be framed in the documentation. Where charities provide data to third party entities, it will be important that data subjects have consented to the use of the data in the manner proposed and, if this is contemplated by the project, to the development of the data using AI. If valuable data is passed to a third party, the charity will need to ensure both that the data is protected and that the charity is remunerated appropriately for its use.

In the case of AI projects, consideration will also need to be given to intellectual property issues. The most likely rights in these circumstances are database rights, patents and copyright. Intellectual property considerations will also inform ownership of the algorithm, as well as ownership of changes or improvements made to it and the algorithm’s output.

Ethics

The Alan Turing Institute describes AI ethics as a set of values, principles and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies. The field has emerged in response to the social harms that can be caused by the misuse, poor design or other unintended consequences of AI systems. In light of the public trust held by charities, ethical issues are likely to be of great importance in their use of AI systems. I have summarised below some of the key ethical concerns which might arise from AI systems.

  • Bias and Discrimination: The risk of bias and discrimination can arise in a number of ways. The technology can potentially replicate the preconceptions and biases of the persons who design it. The data itself may also contain bias because the AI may have been trained on data sets which favour particular racial or other characteristics. In addition, the data samples used to train and test the algorithm are not always representative of the populations in which the AI will eventually be used (a short illustrative sketch of this sampling problem follows this list). There has been significant media commentary on this topic.[3] For example, predictive policing algorithms (which calculate reoffending risks) can be influenced by data which is tainted by social factors caused by inequality (for example, poverty and racial profiling by law enforcers). Charities will need to be aware of these issues when using AI. In addition, it is likely that the risk of bias and discrimination will lead to increased pressure on charitable organisations working in areas relating to equality and discrimination, or otherwise dealing with marginalised groups.
  • Individual Rights: The way in which the technology has been developed can make it difficult to establish and identify who has legal responsibility for the decision or prediction made, for example, in the case of a driverless car which has an accident. This is likely to give rise to issues for individuals seeking to bring claims. The ‘black box’ element of the decision-making means it is possible that individuals will be adversely affected by decisions which, in a worst-case scenario, are incapable of being explained or shown to be reasonable.
  • Reduced Social Connections: Commentators are concerned that excessive use of AI could reduce communication between people and social cohesion, while increasing isolation and mistrust. Again, this is an area where charities may see increased beneficiary need.
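
To make the sampling point above concrete, below is a minimal, hypothetical Python sketch (using numpy and scikit-learn; the data, group definitions and parameters are all invented for illustration). A model is trained on data dominated by one group and then evaluated separately on each group.

```python
# Hypothetical illustration of sampling bias: a model trained on data dominated
# by one group can perform worse for an under-represented group. All data here
# is synthetic and invented purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a slightly different relationship between feature and outcome.
    X = rng.normal(shift, 1.0, size=(n, 1))
    y = (X[:, 0] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

# Training data: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluation on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", model.score(Xt, yt))
```

Because the decision boundary is fitted almost entirely to the majority group's data, a run of this kind will typically show noticeably lower accuracy for the under-represented group, even though no one set out to disadvantage it.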

The use of AI within the charitable sector will require careful thought and planning, and charities will need to ensure that an appropriate governance framework is in place so that their use of AI is non-discriminatory, ethically permissible and justifiable. In doing so, charities will need to identify the types of AI application they will use and assess the risks of each. AI applications that do not affect people's lives and do not process sensitive personal data are less likely to need proactive management.

Future Regulation

The Department for Science, Innovation and Technology published an AI white paper in March 2023[4] which provides a framework to guide the UK’s approach to regulating AI. The government has decided against creating a single regulatory function to govern AI, on the basis that AI is a general-purpose technology with applications across many industry sectors. Instead, existing regulators are expected to consider five key principles when developing regulatory frameworks:

  1. Safety, security and robustness.
  2. Transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

UK regulators are therefore likely to develop and issue guidance in the coming months.

AI is developing at a faster rate than we could ever have imagined, bringing complex ethical and regulatory challenges. The charitable and philanthropic sectors will need to keep abreast of the latest developments or risk falling behind in a world which becomes increasingly dependent on AI. Against this backdrop, it will be important that charities consider their use of AI, its potential, and the ethical challenges it presents.



  1. Leslie, D., Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector, The Alan Turing Institute (2019).
  2. Davies, R., 'Would AI be good or bad for philanthropy? Will AI replace grant-makers?', New Philanthropy Capital (2022).
  3. O'Neil, C., Weapons of Math Destruction, Penguin Books (2016).
  4. Department for Science, Innovation and Technology, policy paper, 'A pro-innovation approach to AI regulation' (2023).
