Week 9 Readings

People strive to create ethically correct AI despite our inability to truly pin down what it means to be ethically correct. Are we then able to create AI with ethically sound foundations? There may be no perfect answer, but Nilani takes steps toward establishing definitions of what it means for AI to be ethical, and toward showing what issues can arise across multiple realms of AI in its absence.

“In the very process of creating technology it loses its neutral standing. It no longer remains just another means to an end but becomes the living embodiment of the opinions, awareness and ethical resolve of its creators. Thus ethics of a technology (or product) starts with the ethics of its creation, and its creators.” – Nilani, The Hitchhiker’s Guide to AI Ethics

Nilani discusses ethics in terms of bias in a model’s predictions, fairness of outcomes, and approaches to accountability and transparency. AI amplifies human biases through numerous factors and reflects back prejudices that are deeply embedded in our societal structures. This was interesting because I had never considered factors beyond ‘biased data’ creating unwanted results in AI. The entire process, and the people behind AI, have the potential to introduce bias to some extent.

“But biased data is only part of the story. Bias can seep into a machine’s intelligence from other sources too; all the way from how AI researchers (read humans) frame the problem to how they train the model to how the system gets deployed. Even with unbiased data, the very process by which some machine learning models achieve accuracy can result in biased outcomes.” – Nilani, The Hitchhiker’s Guide to AI Ethics

Ethical issues surrounding AI also arise from its inherent “black box”: companies refuse to disclose the inner workings of their models and why they work. Nilani posits that this is partly due to the large, complex mathematical operations locked inside machine learning. This is intriguing; the previous reading on Machine Bias, about risk scores for convicts, was an apt example of the lack of transparency in determining a defendant’s risk assessment.

Discussions about ethics are fun and all, but what happens when we need to apply them to systems of algorithmic regulation that operate in a fast-paced environment?

“a successful algorithmic regulation system has the following characteristics:

A deep understanding of the desired outcome

Real-time measurement to determine if that outcome is being achieved

Algorithms (i.e. a set of rules) that make adjustments based on new data

Periodic, deeper analysis of whether the algorithms themselves are correct and performing as expected.” – Tim O’Reilly, Open Data and Algorithmic Regulation
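As an illustration (my own sketch, not something from O’Reilly’s essay), the four characteristics map naturally onto a feedback loop: a target outcome, continuous measurement, rules that adjust to new data, and a periodic audit. The class and the toy numbers below are invented for demonstration:

```python
class AlgorithmicRegulator:
    """Toy feedback loop illustrating O'Reilly's four characteristics.
    Purely illustrative -- the names and thresholds are my own assumptions."""

    def __init__(self, target):
        self.target = target          # 1. deep understanding of the desired outcome
        self.threshold = target       # the current "rule"
        self.history = []

    def observe(self, value):
        self.history.append(value)    # 2. real-time measurement
        if value > self.threshold:    # 3. rules adjust based on new data
            self.threshold = (self.threshold + value) / 2

    def audit(self):
        # 4. periodic, deeper analysis: is the system still near its goal?
        avg = sum(self.history) / len(self.history)
        return abs(avg - self.target) < 0.25 * self.target


reg = AlgorithmicRegulator(target=100)
for reading in [90, 105, 98, 120]:   # incoming measurements
    reg.observe(reading)
print(reg.audit())                   # prints True: average stays near target
```

The point of the sketch is that regulation here is continuous engagement, not a one-off rulebook, which is exactly the contrast O’Reilly draws with periodic enforcement.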

O’Reilly attempts to explain algorithmic regulation beyond metaphor by grounding it in real-life cases, one of which is the invention of new financial instruments implemented by algorithms that trade at electronic speed. However, he then posits:

“How can these instruments be regulated except by programs and algorithms that track and manage them in their native element in much the same way that Google’s search quality algorithms, Google’s “regulations”, manage the constant attempts of spammers and black hat SEO experts to game the system?” – Tim O’Reilly, Open Data and Algorithmic Regulation

This is intriguing: O’Reilly points out that after multiple revelations about big banks, mere periodic bouts of enforcement clearly aren’t enough. There needs to be constant engagement in regulation, with the ability to quickly change the rules to limit the effects of bad actors. O’Reilly pushes for the consequences of bad action to be dealt with systematically, rather than subjected to haphazard enforcement.

If data is required to be timely, machine-readable, and complete to produce smart disclosure, how can we prevent ethical issues such as privacy violations?

“The answer to this risk is not to avoid collecting the data, but to put stringent safeguards in place to limit its use beyond the original purpose.” – Tim O’Reilly, Open Data and Algorithmic Regulation

I find this interesting, as there are ways to combat the privacy issue, such as anonymising the data or retaining it only for the timeframe in which it is relevant. Venturing down this path benefits not only corporations; it offers a better way to sift out problems while simultaneously providing newer services that deliver consumer and citizen value. Google managed to implement transparency and oversight through open data, which allows for competition. It also means that the data used to make determinations must be auditable and open to public inspection.
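The two safeguards I mention, anonymisation and time-bounded retention, can be sketched concretely. Everything below is my own illustration (the salt, the 30-day policy, and the record fields are assumptions, not anything from the readings):

```python
import hashlib
import time

# Assumed policy for illustration: keep records for 30 days.
RETENTION_SECONDS = 30 * 24 * 3600


def pseudonymise(user_id, salt):
    """Replace a raw identifier with a salted hash, so records can still be
    linked for analysis without exposing who the person is."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]


def purge_expired(records, now=None):
    """Drop records older than the retention window."""
    now = now if now is not None else time.time()
    return [r for r in records if now - r["timestamp"] < RETENTION_SECONDS]


records = [
    {"user": pseudonymise("alice@example.com", salt="s3cret"),
     "timestamp": time.time(), "clicks": 12},               # fresh record
    {"user": pseudonymise("bob@example.com", salt="s3cret"),
     "timestamp": time.time() - 60 * 24 * 3600, "clicks": 4},  # 60 days old
]
current = purge_expired(records)   # only the fresh record survives
```

A salted hash is pseudonymisation rather than true anonymisation (whoever holds the salt can re-link identities), which is exactly why O’Reilly’s “stringent safeguards” on use matter as much as the transformation itself.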

“Given the amount of data being collected by the private sector, it is clear that our current notions of privacy are changing. What we need is a strenuous discussion of the tradeoffs between data collection and the benefits we receive from its use.” – Tim O’Reilly, Open Data and Algorithmic Regulation

I personally think that if measures are put in place to protect personal data, I wouldn’t mind the tradeoff for the benefits it entails. To me, data falls on a spectrum. On one end is personal data, such as my medical documents, financial records, and other information I have online, which shouldn’t be so easily distributed without my full consent. On the other end, data leans toward the impersonal and insignificant, such as my long or short clicks, the products I buy online, and so on. Perhaps the apathy I have toward impersonal data being collected arises from my upbringing: I grew up with technology and was in a way conditioned to accept these practices at face value for the sake of the convenience and ease they bring. It doesn’t bother me; it was the norm.
