The University of Edinburgh Information Services Group (ISG) has begun to explore the potential for leveraging learning analytics data to support student wellness initiatives. There is growing awareness that student mental ill-health is a serious problem in higher education institutions, and broad concern that existing support infrastructure cannot meet the real demand. Simon Chapple, Head of Technology at ISG, University of Edinburgh, has been leading an exploratory effort to analyze existing datasets and identify opportunities to proactively offer support to students at risk. Such an initiative would allow for more efficient use of existing resources and would give the University another tool in support of its commitment to the success and wellbeing of its students.

Ethical Intelligence was approached to help explore the moral and ethical implications of surfacing mental health signals from the aggregate learning analytics data the university holds, and to help develop an ethical framework to guide the project. We recommended that the project be examined in phases. First, clarity is needed regarding the moral and ethical imperatives of the project. Second, it is necessary to understand the legal relations between stakeholders, such as between the student and the university, and the regulatory requirements that apply when handling sensitive data, especially data processed in a way that yields health information. Third, as the project reaches implementation, there are important ethical considerations regarding the data lifecycle and the nature of the algorithmic processing that takes place; there are well-understood artificial intelligence and machine learning safety risks to consider when processing data in the context of digital epidemiology. Finally, there is a fourth phase, when data processing indicates cases where further intervention is warranted: how should such follow-up be conducted?

The research we conducted for the University of Edinburgh ISG discussed the specific ethical issues and challenges emerging in each of these four phases, and described approaches the university could consider to address them. We concluded that there is reason for optimism that the university could meet the challenges present in the first three phases, and that ISG could adopt an ethical protocol to help manage the ethical risks and AI safety challenges. It was the fourth phase where we agreed that more work would be needed in order to act appropriately on the information the analytics project could surface. Developing an ethically responsible and effective strategy for engaging with students the system may flag as potentially benefiting from enhanced mental health support services is a serious challenge. Once an institution develops the capacity to discover signals latent in other datasets that may indicate risks to wellbeing, there is an ethical imperative to put an appropriate method of effective action in place. This is a general concern for many of the opportunities that data science and AI provide: we must be able to make effective and ethical use of the new capacities these tools confer. In this particular context, given the gravity of the subject, the sensitive nature of the data, and the special legal and ethical relationship between students and educational institutions, we identified the engagement process as a major concern requiring further study. This was an interesting project for all involved: we discovered that one of the primary ethical requirements in this application of data science and AI was in fact downstream of the actual processing, and yet fundamental to the overall ethical dimension of the project as a whole.