It’s the stuff of science fiction: machines that make decisions about whom to kill and when. Referred to as “autonomous weapons”, they’re already in use to some degree. But as more sophisticated systems are developed, we wanted to ask an expert in the field whether such systems comply with international humanitarian law and what it means for humanity to give machines power over human life and death.
The great data gold rush
In the marketing world, data is sometimes called the “new gold” because it’s a precious tool for aiming products, services or messages directly at people whose online behavior and personal data indicate they are a potentially receptive target.
But data’s value goes beyond money. It can be used as a tool in law enforcement, to amass political power, or even as a weapon of war. For millions of people around the world, data simply makes life better, linking them quickly with people, services, and pleasures such as art and music.
Data can also save lives. Humanitarian organizations use it to respond more effectively to crises such as natural disasters and conflict, and to track and protect people from fast-spreading diseases such as cholera, Ebola and COVID-19.
But what are the limits and pitfalls of using massive amounts of data in a global market in which there are few global standards — and no universally binding regulations — on the collection, use, storage, sale and protection of data? While some countries and regions have data protection laws (such as the European Union’s General Data Protection Regulation, or GDPR), there’s no global data-protection mechanism.