Some analysts say the future technologies we once dismissed as science fiction are becoming a reality. With autonomous capabilities backed by artificial intelligence, killer robots are a grave concern.
They have even drawn the attention of the UN, as top minds such as Elon Musk call for a ban on these machines.
Musk is one of a growing group of some 116 specialists from around the world who have expressed their opposition to the development of the technology in letters to the UN.
But with the rapid advancement of technology, and the pressure to gain the upper hand on the battlefield, are these specialists fighting a doomed battle?
The U.S. military already fields drones that carry out airstrikes while piloted by an operator at a remote location.
A new wave of autonomous drones is already primed to conduct missions without the need for human intervention.
With the world’s largest military budget, how easy would it be for the U.S. to incorporate artificial intelligence into its weapons programs?
Robotics professionals urging an action plan will present their case at the International Joint Conference on Artificial Intelligence this week.
It’s highly likely that we will see what are dubbed killer robots on the battlefield in the near future, and possibly even in law enforcement in some capacity.
There is a growing need to not only cut down on casualties in warfare but also to reduce costs.
A properly maintained machine can run non-stop for long stretches without needing a vacation or sick leave.
Imagine one of DARPA’s attack robot dogs hunting you down because it thinks you’re a suspect or a person of interest in some investigation.
Or a Terminator-type machine with laser-beam red eyes and an advanced bulletproof titanium exoskeleton resistant to armor-piercing ammunition.
That sounds far-fetched to some people, but not to industry trendsetters like Mustafa Suleyman, co-founder of DeepMind Technologies, an AI machine-learning company acquired by Google.
Fully autonomous killer machines, as things stand, are in theory a potential threat to humanity. We already have the knowledge and the means to destroy the entire planet many times over with thermonuclear weapons.
It seems the accidental deployment of nukes is an immediate concern on the world stage. Just look at the past few weeks, with the torrent of bellicose rhetoric over the tiny island of Guam.
We live in an age where autonomous weapons are at the forefront of world leaders’ minds. With the ability to devastate an opponent while minimizing the loss of personnel, passing up on this technology might not be on the table.
Technology is a double-edged sword in some respects. It can be used to empower humanity but also to destroy an entire civilization.
It would be far better to have responsible, capable minds schooled in the way of true peace, to avoid the many horrors we see today.