Artificial Intelligence is all over the news, and it’s dividing opinion. It’s seen either as a technology with the power to enrich lives, or as the eventual cause of the apocalypse – one that will steal our jobs in the meantime.

When “the godfather of AI” Dr Geoffrey Hinton recently quit Google, he warned of the dangers posed by the technology and described AI tools – which he feels may soon be more intelligent than humans – as “quite scary”.

But the fearmongering doesn’t stop there.

“AI could kill off the human race!” screamed headlines earlier this year, following testimony by an Oxford professor to the UK Government’s Science and Technology Committee. 

A recent open letter, signed by dozens of senior tech leaders, also called for a pause on the development of AI models until robust safeguards are put in place.

All this hype has heightened public awareness of AI like never before. Suddenly, everyone’s talking about it, often with concern.

But do these stories highlight a genuine cause for panic, or do we all just need to calm down?

Well, if you look beyond the headlines, you’ll hear Dr Hinton say that in the shorter term, he believes AI will deliver many more benefits than risks. But he’s quick to add that it’s the responsibility of governments to ensure it is developed "with a lot of thought into how to stop it going rogue". 

And to its credit, the UK Government recently announced £100 million of investment in a new AI taskforce, whose responsibilities include the development of safe and reliable AI models. This is also a focus for the Scottish Government, through the establishment of the Scottish AI Alliance and its AI strategy.

Bill Gates recently called AI the most important technological advance in decades, as fundamental as the creation of the internet or the mobile phone.

And I have no doubt that AI has the potential to automate tedious tasks, at work and at home – making our lives easier, more efficient, and perhaps more enjoyable. There are also exciting possibilities in healthcare, where AI can quickly crunch data, speeding up medical diagnoses, drug discovery and development.

But, as with any emerging technology, there is always the potential for unintended consequences; in the words of technology ethicist Stephanie Hare, “when you invent the ship, you also invent the shipwreck”.

To avoid sinking, we must educate both technologists and users on maximising the benefits and minimising the harms of AI. If we consider potential risks in how AI is designed and used, we can actively work to reduce them.

Ultimately, we can’t predict the future of AI, but we can create responsible and ethical legal frameworks to protect trust in it. This is something the UK Government is conscious of and is seeking to tackle.

But legislation is historically slow to follow innovation, so researchers, engineers, governments, organisations and individuals must work together to promote responsible AI and ensure its impact on society is positive.

I’m confident mankind is safe for now, and being outsmarted by AI is certainly not inevitable. But if we believe in a world where AI adds value to our economy and society, it is everyone’s responsibility to step up, engage in debate, debunk the hype, and shape the future.

Brian Hills is CEO, The Data Lab