It is 2123 and you are being rushed into theatre to have a swollen appendix removed.

The surgeon waiting for you in the operating room has downloaded all the diagnostic files relating to your case, and will perform the procedure with micrometre precision.

The risk of error or infection is reduced, incisions minimised, and recovery accelerated.

Except the surgeon carrying out the operation will not be human, but a super-intelligent robot specially programmed to perform emergency abdominal surgery.

This may sound like science fiction, but for those working at the cutting edge of artificial intelligence and robotics it is seen as one probable future for healthcare.

"I would expect that yes, at some point in the future, that's exactly the kind of thing one should expect to see," said Subramanian Ramamoorthy, a professor and personal chair of robot learning and autonomy at Edinburgh University's School of Informatics.

"Predictions are always hard, but technically, that would be the natural evolution. It'll be in steps, from smaller procedures to bigger procedures.

"The introduction is going to be slow. We'll start with existing, minimally-invasive surgery and giving people one step of assistance.

"Then, once people start to trust that, you go one step further into task-led robotic surgery: you tell me what bit you want excised - a polyp, for example - and then that gets automated.

"Then, one day in future - it's hard to predict when - we can imagine that you could have an entirety of surgeries, but we're not there yet. In commercial terms, we're not even at the beginning of this."

Current 'robotic surgery' used in the NHS, such as Da Vinci robots, is still operated and guided by a human surgeon (Image: PA)


Ramamoorthy is among those on the frontline of artificial intelligence (AI) in medicine.

At a specially-created lab at the Bayes Centre in Edinburgh, which mimics a typical operating theatre environment, he has been pioneering the development of sensor-guided autonomous robots that can help cancer surgeons "push towards tighter margins" - meaning that less healthy tissue is removed and recovery rates improve.

This work on safe AI for surgical assistance builds on ideas Ramamoorthy first explored through research into self-driving vehicles. 

He sees parallels between the incremental progress in autonomous driving technology - from parking assist to eventual driverless cars - and the step-by-step advances in healthcare from surgeon-guided robots (already a reality) to the autonomous robot surgeons of the future.

"In the beginning everyone hypes it and is a bit disappointed, and then you get gradual growth," said Ramamoorthy.

"It's exactly the same thing here. To the insiders the hype was not justified; likewise, the feeling that some people have that 'it's not going to happen' is also unjustified, because it was always going to be a long game."

There have been calls recently for AI development to be paused for six months amid fears it is unsafe (Image: Edinburgh University)

When it comes to diagnostics, AI is already finding a foothold in the NHS.

A successful study in Grampian used AI as a "second pair of eyes" to scan 80,000 mammograms for signs of breast cancer. 

It is also being trialled in Glasgow to alert clinicians to the COPD patients most at risk of emergency hospital admission, so that pre-emptive action can be taken instead.

As for robots performing surgery, however, Ramamoorthy says it is a bit like the transition from map-reading to GPS.

He said: "At the moment a surgeon in the room outside looks at the imaging, keeps it in their head, and then walks in and performs the surgery based on what they can see.

"It's a bit like the old-fashioned way of steering a ship after having looked at the map somewhere else, whereas what we are talking about is more like GPS-driven navigation.

"What we're looking for here are real-time diagnostics giving the robots that micrometre level of accuracy.

"The issue of staffing shortages in many ways is secondary - not because it's not important - but for a long time they're not going to get rid of people because people are still going to be sitting there monitoring it, and supervising it.

"In the beginning, accuracy will be the driver."


Ramamoorthy will be discussing the latest developments during a talk at the Bayes Centre on Thursday.

The 'Games Robots Play' event is part of a week-long series of discussions on artificial intelligence taking place as part of the Edinburgh Science Festival.

It comes days after AI experts including Twitter billionaire Elon Musk and Apple co-founder Steve Wozniak called for a worldwide pause in the training of AI systems with human-competitive intelligence, warning that they "pose profound risks to society and humanity".

It follows the release on March 14 of GPT-4, the next generation of the deep-learning language model behind the chatbot ChatGPT.

While Musk and Wozniak caution that no one "can understand, predict, or reliably control" these emerging innovations, others have compared a moratorium on AI development to "[delaying] the Manhattan Project and letting the Nazis catch up" - a reference to the wartime race to build nuclear weapons.

To the layman, all this seems worryingly reminiscent of HAL 9000 - the rogue computer in '2001: A Space Odyssey' - or Skynet, the fictional artificial intelligence system in the 'Terminator' franchise which, on becoming self-aware, triggered global nuclear warfare before its human inventors could shut it down.

The late theoretical physicist, Professor Stephen Hawking, once warned that it was impossible to foresee whether humanity would be "infinitely helped, ignored, or conceivably destroyed" by AI.

"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation," he told a technology summit in Lisbon in 2017.

The question of whether sentient robots are "friends or monsters" will be discussed by a panel of guests at the Edinburgh Science Festival on Tuesday, moderated by Professor Michael Herrmann of the Edinburgh Centre for Robotics.

He said: "In the past there wasn't really a question of whether robots or machines can be sentient, but now there is a feeling that something has changed - a new quality has been reached - so we need to ask these questions again."

The concept of sentience in robots throws up a raft of dilemmas, from the ethical to the existential: should robots have rights, for instance, and if we can create consciousness in machines, doesn't that prove once and for all that it is not some divine gift for humans?

Robots with human-like intelligence and consciousness are 'possible' - but the consequences are uncertain (Image: CornerShopPR)

One of the panellists, Rupert Robson, author of 'The Sentient Robot', notes that we still don't know why consciousness exists.

He said: "If you think about our brain, all sorts of cognitive and emotional functions take place - all sorts of information processing.

"The question is, why doesn't all this information processing go on in the dark, just as it does in a handheld calculator? 

"And yet we know that it doesn't go on in the dark - we're aware of it. That is sentience.

"But it's not absolutely clear what sentience, or 'consciousness', brings to the party because all of that information processing is going on anyway.

"Do AIs or algorithms like ChatGPT and GPT4 have sentience or consciousness?

"Absolutely not - yet.

"Is there a likelihood that we will be able to figure out consciousness in order to embed it into robots? Yes, that is possible.

"But it's not going to happen by accident. It's going to happen because we've designed it into the robot."


For his part, Robson thinks sentience could be the thing that actually saves us from a Terminator-style doomsday.

"Make no mistake, we will develop - in the fullness of time - really super-clever robots, with a much greater breadth of intelligence than ChatGPT, and at that point we have a danger - a risk to ourselves - and we need to mitigate that risk.

"I think sentience is a way of doing that.

"If [the robots] see the world through our eyes, if they are able to empathise with us because they have sentience, then I think there is an argument - a good argument - that we stand a greater chance of them being friendly to us, rather than hostile."

Back in the more mundane world of healthcare, Dr Cian O'Donovan, a researcher at University College London, is concerned with making sure that we harness AI to our benefit - not to replace staff, but to free up clinicians and carers to spend more time with patients.

He said: "It's not simply a matter of 'the robots are coming and taking all the jobs' - the robots are coming, that means we've got to think really hard about training.

"Patients will benefit if robotics and automation technologies allow them to spend more time with human carers."

Maximising the amount of time for human-to-human contact in care is seen as one of the potential benefits of AI and automation (Image: PA)

O'Donovan cautioned that AI is "not a panacea" for workforce shortages if we fail to plan for an ageing population. 

He added: "There's a danger that, because of the successes - or perceived successes - in places like diagnostics or in replicating chess players, we're too quick in projecting those successes into other areas.

"Thinking about wards, thinking about care homes, these environments are so unpredictable and so far removed from the board games, from the X-ray labs or, in the case of robots, from the factory floor. 

"I don't think that's fully costed in by governments thinking that AI technologies are the future across the board."

'Sentient Robots: Friends or Monsters?' is at the Bayes Centre, Edinburgh on Tuesday April 11

'Can Robots Care?', with Dr Cian O'Donovan, is at the Bayes Centre on Wednesday April 12

'Games Robots Play' is at the Bayes Centre on Thursday April 13