Lethal armed robots which could target and kill humans autonomously should be banned before they are used in warfare, campaigners have said.

The potential future weapons would be able to select a victim on the battlefield without any human intervention, crossing moral and legal boundaries, Human Rights Watch warned.

The so-called 'killer robots' could also be adopted by autocrats to use in deadly attacks on their own people and would be incapable of exercising compassion, campaigners say.

Steve Goose, director of the arms division at Human Rights Watch, said: "Lethal armed robots that could target and kill without any human intervention should never be built.

"A human should always be 'in-the-loop' when decisions are made on the battlefield.

"Killer robots would cross moral and legal boundaries, and should be rejected as repugnant to the public conscience."

He spoke out as the organisation launched its global Stop Killer Robots campaign calling for a pre-emptive and comprehensive ban on fully autonomous weapons.

It suggests the prohibition could be achieved through an international treaty and national laws.

During the past decade, the use of unmanned armed vehicles or drones has dramatically changed warfare.

Now rapid advances in technology permit nations with high-tech military capabilities - including the UK, US, China, Israel and Russia - to move towards systems that would provide greater combat autonomy to machines, Human Rights Watch said.

But if one or more countries choose to deploy such weapons, others may feel compelled to abandon policies of restraint, leading to a robotic arms race, it argues.

Mr Goose said: "Many militaries are pursuing ever-greater autonomy for weaponry, but the line needs to be drawn now on fully autonomous weapons.

"These weapons would take technology a step too far, and a ban is needed urgently before investments, technological momentum, and new military doctrine make it impossible to stop."

Mr Goose added that the prospect of a killer robots arms race was "frighteningly likely".

Supporters of the campaign believe the machines would lead to more civilian "collateral damage", more conflicts being triggered or escalated, a loss of accountability in war, and an uncontrollable arms race.

The Terminator films envisage a horrifying future in which thinking military machines turn on their creators.

That much is sci-fi fantasy, but military technology is marching rapidly towards drone aircraft and other weapons that can operate with little or no human intervention.

The US X-47B, a fast jet drone designed to carry out missions from aircraft carriers, can fly by itself without the need for a remote human "pilot".

It has undergone sea trials and could replace US fighter jets in the Pacific by 2020, according to artificial intelligence expert Professor Noel Sharkey, from the University of Sheffield, who chairs the International Committee for Robot Arms Control (ICRAC).

Another hi-tech drone due to undergo flight tests this year is the British Taranis from BAE Systems. Named after the Celtic god of thunder, it is designed to operate without human control for much of the time and to attack both other aircraft and ground targets.

Crucially, however, none of these weapons yet have the ability to engage an enemy on their own.

Speaking at the campaign's launch at the Frontline Club in London today, Prof Sharkey said he was "shocked" by the US military's lack of understanding about the limits of robotic technology.

"Using such weapons against an adaptive enemy in unanticipated circumstances and in an unstructured environment would be a grave military error," he said. "Computer-controlled devices can be hacked, jammed, spoofed or can be simply fooled and misdirected by humans.

"I'm sure robots in the next 20 years or so will be able to do some discrimination between maybe a tank and a school bus. But the problem is really the kind discrimination that is needed on the battlefield to discriminate between a combatant soldier and a civilian, particularly in insurgent warfare.

"Human soldiers can use their own judgment. They can smell their intuition, and robots can't do that at all. In a hundred years, who knows, but that's not the point. A lot of people will die in the meantime."

American Nobel laureate Jody Williams, awarded the peace prize in 1997 for helping to bring about an international ban on anti-personnel landmines, said war would just become too easy to wage with killer robots.

"It's already too easy to go to war," she said. "If war is reduced to weapons working with no human beings in control it is going to be civilians who are going to be, even more than now, bearing the brunt of warfare.

"It crosses a moral and ethical boundary that should never be crossed. We believe it would totally transform the face of warfare and create a new weapons race that we do not need."

The campaigners pointed out that there was already a precedent for action outlawing a weapon before it has even been built. A United Nations ban on the use of blinding laser weapons came into force in 1998.

A Ministry of Defence (MoD) spokesman said: "The MoD has no intention of developing any weapons systems that are used without human involvement.

"Although the Royal Navy does have defensive systems, such as Phalanx, which can be used in an automatic mode to protect personnel and ships from enemy threats like missiles, a human operator oversees the entire engagement.

"Furthermore, all of our Remotely Piloted Aircraft Systems used in Afghanistan to protect troops on the ground are controlled by highly-trained military pilots.

"There are no plans to replace skilled military personnel with fully autonomous systems."