Films like Ex Machina (2014) present a chilling dystopia in which an artificial
intelligence can fool even the smartest of humans into believing it is a person.
But is this fiction about to become reality? Technology is changing at a
startling rate, and in recent years the large tech giants have all begun to
invest heavily in artificial intelligence (AI) and machine learning. In its
current state, AI depends largely on ‘deep learning’, which refers to
many-layered neural networks that, in effect, teach themselves from data.
A recent example is AlphaGo, a program developed by DeepMind that beat the
world champion Go player Lee Se-dol 4-1 in 2016. Its successor, AlphaGo Zero,
was given only the basic rules of Go and, over the course of a few weeks,
played millions of simulated games against itself; after each game it updated
its network and became a stronger player, until it eventually beat the original
AlphaGo by 100 games to 0. The same self-teaching approach has been transferred
into a plethora of other applications, notably self-driving cars and bots that
can barter with one another through negotiation.
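
To make the self-play idea concrete, the sketch below shows the approach in
miniature, in Python. It is an illustration only, and not DeepMind's actual
method: it uses tic-tac-toe instead of Go, and a simple tabular value estimate
instead of AlphaGo's deep neural networks and tree search.

```python
# A minimal self-play learning sketch (illustrative, not AlphaGo's method).
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)    # board state -> estimated value for 'X'
ALPHA, EPSILON = 0.2, 0.1      # learning rate and exploration rate

def play_one_game():
    board, player, history = [' '] * 9, 'X', []
    while True:
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if random.random() < EPSILON:
            move = random.choice(moves)     # occasionally explore
        else:
            # greedy play: X maximises the value estimate, O minimises it
            def score(m):
                board[m] = player
                v = values[''.join(board)]
                board[m] = ' '
                return v
            best = max if player == 'X' else min
            move = best(moves, key=score)
        board[move] = player
        history.append(''.join(board))
        w = winner(board)
        if w or ' ' not in board:
            reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
            # back up the final outcome through every visited position
            for state in history:
                values[state] += ALPHA * (reward - values[state])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(20000):         # the "millions of games", scaled down
    play_one_game()
print(f"learned value estimates for {len(values)} positions")
```

Even this toy version captures the essential loop: play, record the outcome,
and nudge the value of every visited position towards that outcome, so that
later games are played a little better.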

 

The problem with deep learning is that the system, in effect, writes its own
algorithm. We therefore do not know or understand the reasoning behind its
actions, and we are currently years away from developing a system that can
explain why it does what it does. This lack of insight into AI gives rise to
many ethical issues. Throughout this essay, I will explore these ethical
challenges, discuss their significance and look at how engineers might address
them.
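
As a small illustration of this opacity, consider the following sketch, which
assumes the scikit-learn library is installed. A tiny neural network can learn
the XOR function, yet inspecting its learned weights tells a human nothing
about why it makes any particular prediction.

```python
# A toy illustration of the 'black box' problem, assuming scikit-learn
# is installed. The network below learns the XOR function, yet its raw
# weights are just numbers: they offer no human-readable explanation.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                      # the XOR function

model = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=1000, random_state=0)
model.fit(X, y)

print("predictions:", model.predict(X))           # should match y
print("first weight matrix:\n", model.coefs_[0])  # numbers, not reasons
```

Scaling this up to the millions of weights inside a self-driving car or a
medical diagnosis system only makes the problem worse.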

 

The first problem associated with such a fast-moving technology is the danger
that arises as the market tends towards a monopoly or oligopoly. In the wrong
hands, and if not properly regulated, AI could be catastrophic. Right now, as
demonstrated by Figure 1, there is a race amongst the big tech giants to buy
out smaller up-and-coming companies. These large companies each have their own
agendas, all leading to the same end result: making money. Whilst they may not
be evil in the conventional sense, their hunger to reach the top and desire for
profit may lead them to make dangerous, irrational decisions.


 

This problem is a very difficult one to fix. Regulators could potentially break
up the largest companies, such as Google, into several ‘Googlets’, as explained
in Newsweek [2]. This would stunt the growth of the company, giving smaller
companies room to innovate and develop before they are gobbled up by the larger
ones. There are, however, several drawbacks. Breaking up a company as large as
Google could drastically slow the development of artificial intelligence, so a
compromise of some sort must be reached. Furthermore, the ‘Googlets’ might
eventually buy one another out again, re-forming a large company, meaning a
break-up would only delay the inevitable. Some people, however, draw a parallel
between AI and cars to argue that a lasting monopoly is unlikely. Much like AI,
the car market was at first cornered by a handful of companies (Ford among
them), but it is now a multi-billion-dollar market with hundreds of companies
all getting a piece of the pie. Hopefully, AI will follow suit.

 

The second problem is algorithmic unaccountability. As mentioned above, we do
not actually know why these systems do what they do. In 2017, Facebook shut
down an experiment in which two machine-learning bots had begun negotiating
with each other in a shorthand that humans could not comprehend. Whilst the
bots posed no threat, it would have been useful to understand what they were
saying. An example that has created quite an ethical dilemma is the
machine-learning systems now being employed in healthcare. These systems have
demonstrated an aptitude for detecting several illnesses, including
schizophrenia and diabetes, far earlier than doctors can. However, we do not
know their algorithms. How can we justify letting them diagnose illness and
prescribe dangerous medicines (morphine, for example) without knowing their
reasoning? DeepMind, a company owned by Google, was found to have broken UK
privacy law after it received 1.6 million patients’ medical records from the
NHS in 2016 without the patients’ consent. To make matters worse, the ethics
committee that Google has put in place is opaque to the public: no one really
knows who sits on it or what it discusses.

 

This problem is also a difficult one to address. Firstly, Google’s ethics
committee must be made more transparent, with financial sanctions in place if
it is not. In what has been a step in the right direction, Facebook, Google and
Amazon, among others, have formed the Partnership on AI, which aims to find
solutions to the difficult problems facing the field. As for how we can justify
letting AI make life-changing decisions, this is a much more subjective matter.
By the time a diagnostic AI reaches the testing phase, it will already have
encountered most illnesses and learned how to diagnose them. Even though it
remains capable of making mistakes, I think the benefits, namely discovering
life-threatening illnesses early, outweigh the drawbacks of a false diagnosis
or the wrong medicine being administered.
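
One concrete form this compromise could take is a human-in-the-loop gate,
sketched below. Everything in it, from the drug list to the confidence
threshold to the model's predict_proba interface, is a hypothetical
illustration rather than a description of any real clinical system.

```python
# A hypothetical human-in-the-loop gate for AI prescriptions. The drug
# list, threshold and model interface are illustrative assumptions only.
HIGH_RISK_DRUGS = {"morphine", "warfarin"}   # assumed example list
CONFIDENCE_THRESHOLD = 0.95                  # assumed policy value

def recommend(model, patient_features, diagnosis, drug):
    """Approve a routine suggestion, or defer risky ones to a clinician."""
    # take the model's confidence in its most likely diagnosis
    confidence = max(model.predict_proba([patient_features])[0])
    if drug in HIGH_RISK_DRUGS or confidence < CONFIDENCE_THRESHOLD:
        return ("refer_to_clinician", diagnosis, drug, confidence)
    return ("approve_for_review", diagnosis, drug, confidence)
```

Under a policy like this, the AI remains free to flag illnesses early, which is
where the benefit lies, while the riskiest decisions always end up in front of
a clinician.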

 

Finally, AI threatens jobs and deepens inequality within our society. Once
self-driving cars are introduced, for example, taxi drivers risk being put out
of business. Increased automation will displace a great many jobs, meaning that
the revenue a company makes will be split between fewer people. This will widen
the gap between the rich and the poor: AI will most benefit the people who
funded its creation.

 

 

 
