Is AI-Driven Surveillance Beneficial, and What Are Its Implications for Privacy?

The Issue

AI has become pervasive, and its applications now span nearly every sector of daily life. Despite the growing deployment of AI-driven technologies, laws and regulations have, for the most part, lagged behind, creating a legislative gap. The absence of concrete standards has had damaging consequences: unethical practices in the implementation and fielding of some AI technologies have persisted, leading to privacy violations and to physical or psychological harm. Many countries have recognised the grave danger that unregulated AI poses to their citizens and to national security. In the wake of these concerns, governments are moving towards legislation that standardises and scrutinises real-world applications of AI to ensure they are safe and ethically deployed.

The Rise of AI Regulations

The inherent risks posed by unregulated AI applications have pushed governments around the world to institute the necessary legislation. Most of these are developed nations, where the uptake and implementation of AI technology is much higher. Leading the way is the US, which has had several problematic encounters with AI in various sectors, particularly law enforcement. As stakeholders raised further concerns, the US introduced draft rules in January 2020 intended to regulate AI. The draft regulations encourage AI innovation by removing associated bottlenecks and set a standard for how government agencies should approach AI.
Similarly, the EU has recognised that it has lagged in implementing standards to regulate AI technology. The EU acknowledges that unregulated AI poses a threat to the entire population, especially children, who are vulnerable to exploitation by rogue AI applications. The growing adoption of AI in critical infrastructure such as health, immigration and communication increases the inherent risks, particularly through data privacy violations. EU officials recognise that the absence of regulation heightens the risk of AI misuse and hinders adoption because of privacy fears.
In Australia, AI regulations have not yet materialised, but the need for them is apparent. The robodebt scandal has become a turning point, highlighting the disastrous outcomes of unregulated AI. In that instance, the government was forced to offer nearly $1 billion in compensation, in addition to legal costs, to settle a class action lawsuit, illustrating how unregulated AI can contribute to disastrous administrative decisions and their repercussions.

Our View

The move by the EU to formulate AI regulations is welcome; without it, the expansive AI field risks descending into chaos driven by unethical applications. The pervasiveness of AI technology, and especially the volume of private data it handles, should be a point of concern when no legislation governs what happens to that data. In Australia's case, the robodebt scandal should serve as a wake-up call to the government and all other stakeholders about the disasters that await if they fail to institute AI regulation before it is too late.

Written by:

Sarah Klain