The UK’s national security and law enforcement agencies help to keep our people and institutions safe from harm.

It’s always been a difficult job, but in a world where technology is changing rapidly, it is more challenging than ever. To keep pace, national security and law enforcement agencies are increasingly harnessing the power of artificial intelligence to create automated analytics, which analyse data and provide insight to help reduce threats to public safety.

Those automated analytics rely partially on data gathered from electronic monitoring and surveillance, which involves a degree of intrusion into some people’s private lives.

Balancing the goals of law enforcement and national security with individuals’ right to privacy is foundational to a liberal, democratic society. In the UK, that balance is regulated by the Investigatory Powers Act 2016, which requires agencies to consider two criteria: necessity and proportionality.

The principle of proportionality is a familiar one: we use it every day to determine our behaviour by balancing the risk and reward for our actions. It’s an instinct connected to what we judge to be fair and just.

However, the stakes in the national security context are high – literally life and death. A question that national security and law enforcement agencies face with increasing regularity is the extent to which automated analytics and AI change the nature of proportionality and how we assess it.

On one hand, national security and law enforcement agencies have an obligation to keep citizens safe in a challenging operational environment.

On the other hand, as digital information about individuals becomes more widely available – both from what they choose to share online and from personal data collected by services, institutions and surveillance – the capacity to intrude on their private lives is growing.

These high stakes call for correspondingly high standards of clarity and accountability.

I am the co-author of a new report from The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, which examines how to balance the needs of national security with individuals’ human rights.

The report offers a new structured framework to help better understand and assess the level of privacy intrusion when AI analytics are used.

To build that framework, we held focus groups and conducted interviews to gather opinions and feedback from stakeholders across the UK government, national security and law enforcement, and legal experts outside government.

The framework focuses on six key factors that will help individuals and organisations assess how automated analytics affect the level of privacy intrusion. These are: the datasets; the results; the role of human inspection and decision making; tool design; data management; and urgency, timeliness and resources.

Our hope is that the framework could be integrated into existing authorisation and compliance processes, offering another guarantee of privacy against overreach into our private lives. Careful consideration of our six factors will help to provide assurances that automated analytics are performing in accordance with the letter and the spirit of existing regulation.

Artificial intelligence is set to become an increasingly integral part of our lives, our jobs and our institutions. It’s vital that reports like these are part of the ongoing dialogue about how to use AI as a tool that works for us, that helps keep us safe, and also allows us control over our privacy.

Professor Dame Muffy Calder is head of Glasgow University’s College of Science and Engineering and professor of formal methods at the School of Computing Science.