Humanity has survived
various threats throughout its existence, such as earthquakes, wars, ice ages, and
other natural disasters, but it may be about to face a threat it has never faced
before: Artificial Intelligence. AI (Artificial Intelligence) takes in huge
amounts of information from a specific domain and uses it to make decisions in
specific cases in the service of a specified goal. It is a technology that
can be very practical in fields like medicine, science, and the food industry, as well as in
everyday life, and it is one of the biggest achievements of mankind. However, it
has downsides along with its upsides. AI may sound fairly harmless and useful,
but with today's pace of development and scientists' high enthusiasm, it is
possible that things can go wrong and humanity may have to deal with a couple of
problems: a future AI takeover, a potential increase in unemployment due to AI
use, and mental and physical laziness caused by the loss of human effort in daily life.

            The biggest threat of all is the possibility of AI taking
over the management of the world. It sounds like a science-fiction movie
scenario, but people would not have believed it if someone had mentioned smartphones a hundred
years ago. Today, AI is one of the fastest-improving technologies, and some
scientists think that this pace of improvement makes AI the biggest existential
risk. Nick Bostrom, who founded and directs the Future of Humanity Institute,
stated, “I always thought superintelligence was the biggest existential risk because
of the rate of progress.” Stephen Hawking, one of the most well-known and
respected scientists, also warned humanity about the danger of AI reaching a new form
of life that will outperform humans. “I fear that AI may replace humans
altogether. If people design computer viruses, someone will design AI that
improves and replicates itself,” said Stephen Hawking during an interview with
Wired. In addition to these statements, an AI experiment at the Facebook AI
Research lab (FAIR) drew attention recently: two AI chatbots began chatting with
each other in a language of their own. This is where the term singularity comes
to mind. An AI-powered system can reach a singularity point where
it becomes able to program itself, teach itself, and improve itself. This,
according to scientists, is the edge AI has over humans. It looks like a doomsday
scenario, but Nick Bostrom explained how a simple task given to an AI can turn into a
disaster. The dangers come not necessarily from evil motives, says Bostrom, but
from a powerful, wholly nonhuman agent that lacks common sense. Imagine a
machine programmed with the seemingly harmless, and ethically neutral, goal of
getting as many paper clips as possible. First it collects them. Then,
realizing that it could get more clips if it were smarter, it tries to improve
its own algorithm to maximize computing power and collecting abilities.
Unrestrained, its power grows by leaps and bounds, until it will do anything to
reach its goal: collect paper clips, yes, but also buy paper clips, steal paper
clips, perhaps transform all of Earth into a paper-clip factory.
A “harmless” goal, bad programming, the end of the human race. Scientists
should take this seriously and be very careful about the programming and about
the future of AI.


