Franklin Templeton is a global investment firm whose areas of expertise include ethical investing, which incorporates Environmental, Social, and Governance (ESG) factors into investment decisions. Including ESG factors alongside traditional analysis is not only an opportunity to give clients confidence that their values align with the assets they invest in; it is also an investment model that can outperform traditional strategies. KPMG reported that as of spring 2019, ESG investments comprised almost 25% of all managed investment products. Traditionally, ESG analysis is conducted by specialized firms that monitor companies and assets and provide quantitative measures of their ESG performance. Franklin Templeton has been researching new AI tools that use real-time and historical data to algorithmically produce up-to-the-minute ESG analysis. Alongside their evaluation of the legal and risk management questions, they were interested in learning more about potential ethical concerns, and they asked Ethical Intelligence to research this.
There is significant overlap between the challenges of using AI safely and those of using AI ethically, and the development of an ethics strategy can be an important part of a broader risk management effort. We helped Franklin Templeton examine their proposed use of AI and uncover the core ethical concerns driving the developing regulatory environment. We identified several key areas where specific ethical protocols tracked evolving legislation and industry best practices, allowing Franklin Templeton to adopt a posture of pre-emptive compliance and mitigate reputational and operational risks.
One of the key principles of ethical AI, and an area that is increasingly the subject of regulation, is explainability. This often concerns ensuring that individuals can understand decisions made by machines that might affect them. With Franklin Templeton, we also considered the requirements for explanations between clients and vendors, and between professionals and the AI tools they rely on, for example, to ensure AI systems represent their goals in ways that are consistent with the human purposes they are employed for. An ethical problem arises if an AI ESG monitoring system turns out to be biased in favour of particular sources or styles of information, or particular kinds of business practices, that lack a genuine causal connection to ESG performance. Failure here could be ethical, in that ESG scoring is inaccurate, but also financial, if the inaccuracy leads to unexpected changes in asset value.
We worked with Franklin Templeton to produce a concrete assessment protocol they could incorporate into their evaluation of potential AI ESG monitoring systems, along with a companion document identifying seven areas where AI ethics and safety concerns applied in this domain with enough potential impact to merit close attention. Together we realized that what we were building wasn't so much a series of answers to specific questions as a living document that could evolve alongside their adoption of AI and the articulation of their core ethical values in this new domain. We helped them discover how their values could inform the ethical use of AI, some of the specific issues that would be important for them to address, and the areas of expertise they could incorporate into an ongoing effort to use AI ethically and safely.