Now, through an open letter, the companies have come together to pledge not to create “deadly autonomous weapons.” The document was signed by Elon Musk himself, as well as executives such as Shane Legg, Mustafa Suleyman and Demis Hassabis, the founders of DeepMind, Google’s artificial intelligence subsidiary. In the text, the signatories acknowledge that “artificial intelligence is poised to play a growing role in military systems.” However, the group maintains that “the decision to take a human life should never be delegated to a machine.” The letter also argues that, by selecting targets without human intervention, deadly autonomous weapons would be “destabilizing for each country and individual.”

To address the problem, the group calls on governments to help create “a future with strong international norms, regulations and laws against deadly autonomous weapons.” The letter was organized by the Future of Life Institute, which aims to “mitigate the existential risk” to humanity. The institute has also sent the letter to the UN, calling for a ban on robotic weapons. The document was signed by 170 organizations and 2,464 experts from 26 countries.

It is not yet possible to know whether the movement will produce significant results, but companies and researchers acting together stand a better chance of success. Google’s own AI principles, for example, were only defined after pressure from employees who asked the company to get out of the “war business.” So, what do you think about this? Share your views and thoughts in the comment section below.

