Part 3: So, who’s responsible for all this fancy tech?

Artificial Intelligence is already responsible for making a number of decisions with real implications for people’s lives. There is an ongoing debate about how much control should be handed to these machines, and whether it is at all ethical that they have so much power over human lives.

Technology has always been very difficult to govern, and there is no better example of this than the internet. The internet is something nobody really predicted; it has taken off and become a massive entity of communication, participation, and more. It was hard to predict what would come from it, and near impossible to police. When asked whether the internet would or has become unmanageable by humans, Sandra Braman – an expert on the effects of digital technology on society – replied yes. She explained that because it is increasingly complex and constantly evolving, it has reached “a moment of ungovernability”. Current laws still do not fully reflect the capabilities of the internet, but they are slowly adapting. One example of this is cyberbullying: a single term used to describe an incredibly broad range of actions, which is both good and bad.

Similarly, we have taken a back seat in the governance and policing of AI technology. This has been great for the development of the technology, but it has also left a gaping hole in what is legal and ethical. One example that really gripped me is an episode of The Good Wife, in which the lawyers are handed a complex lawsuit where a self-driving car, carrying a driver/passenger, is involved in an accident. They are faced with the problem of who to sue on behalf of the injured party: who takes responsibility? Is it the driver, who wasn’t actually driving the car, the manufacturers, who should have fixed the faults, or the programmers?

We need to take measures and create clear laws and regulations that define who takes responsibility for AI technology in cases such as this. A start has already been made by Google DeepMind, which has created an AI ethics board to begin addressing the issues raised by AI technology.

In an article about AI decision-making, Byron Spice says “Machine-learning algorithms increasingly make decisions about credit, medical diagnoses, personalized recommendations, advertising and job opportunities, among other things”. That is a surprising range of things, considering most companies are not transparent or upfront about their use of decision-making algorithms. The article also says the algorithms can “introduce or perpetuate racial or sex discrimination or other social harms”, which highlights the dangers of AI. In some ways it takes the humanity out of the decision-making process. Yes, it means that most decisions will be based on facts and statistics, but sometimes an individual situation is difficult to articulate through pure data and needs that extra human interaction to be fully appreciated.
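To see how a machine-learning system can “perpetuate” discrimination without anyone programming it to, here is a minimal, entirely made-up sketch. The data, the zip codes, and the decision rule are all hypothetical; the point is only that a model trained on biased historical decisions reproduces that bias, even though the protected attribute itself never appears as an input.

```python
# Hypothetical illustration: a trivial "model" that learns approval rates
# from biased historical decisions will reproduce that bias. Zip code here
# stands in as a proxy for a protected attribute (all data is fabricated).
from collections import defaultdict

# Toy history: (zip_code, qualified, historically_approved)
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Training": record past approval outcomes per zip code.
rates = defaultdict(list)
for zip_code, _qualified, approved in history:
    rates[zip_code].append(approved)

def model(zip_code):
    # Approve if more than half of past applicants from this zip were approved.
    past = rates[zip_code]
    return sum(past) / len(past) > 0.5

# Two equally qualified applicants get different outcomes,
# purely because the historical decisions were skewed.
print(model("A"))  # True  - approved
print(model("B"))  # False - rejected
```

Notice that “qualified” plays no role in the learned rule at all: the model is statistically faithful to its data, which is exactly the problem when the data encodes past discrimination.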
