It’s been interesting to watch 2023 become the year in which a niche subject matter became a compelling consideration, and a phrase coined sixty-seven years ago moved from science fiction staple to regular, serious business conversation.

Having bumped along as well-recognised if less well understood terminology, Artificial Intelligence (AI) has seen explosive growth this year, driven by several overlapping factors. Two stand out: the rapid advance of machine learning algorithms, and the availability (and capacity) of massive amounts of data, the second of which is a double-edged sword we will return to.

The advantages and efficiency gains of AI ensure that this is very much a genie that’s out of the bottle, and not for going back in, so we had better learn to manage it. AI has been a subtle part of our lives for decades, but its ability to search, collate, reduce errors, and carry out repetitive tasks has now been democratised, available to anyone with an internet connection and a willingness to try it out.


With great power comes great responsibility, and most of us will be aware that AI carries some distinct health warnings alongside its clear benefits. The area in which Scottish Engineering has recently opened a conversation with our industry is, in fact, employment law.

It’s a topic that has always been key to our support for industry: we were formed as a member organisation in 1865 in recognition that employers needed guidance and support to understand and comply with the new employment legislation being added in step with the growth of industrialisation. That guidance remains as relevant today, and it is perhaps no surprise that our nearly qualified trainee solicitor, Amie Trainor, has immersed herself in understanding AI’s potential impact in this area; having done the hard work, she has been generously sharing it with us.

From those regularly working with AI, the golden rules seem to distil down to four: firstly, be very sure that the question you are asking is the correct one; secondly, be aware that the data available to the engine may introduce bias into its conclusions or results; thirdly, understand that any data input to the process could now be in the public domain, and act accordingly; and finally, always add a human intelligence check to verify the absence of errors before using the output.

On that last point, a suitable example arises from the US, where the lawyer for a man suing an airline in a personal injury claim used AI to prepare a court submission. The lawyer assumed that the cases cited were real and presented them to the court, but the AI engine had in fact fabricated some of them.


In a later submission the lawyer stated that he “did not understand it was not a search engine, but a generative language-processing tool”, prompting the judge to consider sanctions while marking this as one of the first cases of AI “hallucinations” to make it to court.

A more sobering example is working its way through the UK employment tribunal system at present. It involves a former Uber driver who raised a claim after he was prevented from earning, and his account eventually deactivated, because the facial recognition system Uber used to ensure only a licenced driver was working under its booking system failed to recognise him as he attempted to sign in to work.

The claim lays out a case that the facial recognition system was known to have inherent racial bias: not only was the claimant misidentified by the technology, but he was also profiled for heightened and excessive checks because of his ethnicity. Uber applied to the employment tribunal last July to have the claim struck out, but the request was denied, and the case awaits a full tribunal hearing.

To return to the earlier theme, these cautionary tales underline a fundamental principle that has never been specific to AI: any analysis that seeks data or insight on which important ethical decisions will be made needs to be checked, with the final content solely the responsibility of the human author, never abdicated to a software engine no matter how clever it appears. Carefully managed deployment in engineering will bring quality and productivity gains alongside rapid solutions to complex problems, and we, like any other competitive country, should strive for a leadership position.


The advantages of AI are real, well documented, and compelling, so it is no surprise that its global revenues are forecast to increase tenfold by the end of this decade. That returns me to another question of conscience: what limits should we choose to place on this technology?

I mentioned that one of the key enablers of AI’s lift-off is the availability of massive amounts of data. The other side of that equation is that AI itself generates amplified amounts of data every time it is asked a question, whether trivial or not. Data doesn’t come without cost: it exists in data centres that are energy hungry, with that energy currently dependent on fossil fuels. If you haven’t yet looked at the carbon footprint of the internet, have a look, but do brace yourself; it’s a chastening experience.

Paul Sheerin is chief executive of Scottish Engineering.