The cinematic universe is littered with warnings of the
power of AI. From HAL, to Skynet, to Ultron in the forthcoming Avengers movie, the
fear of the singularity – the point where computers can match or outpace the
human mind – appears well rooted in the collective unconscious.
Notably though, these franchises have historically missed the mark on timing. HAL should have left us floating in space in 2001. Skynet's Judgment Day has been set for 1997, 2003, 2004, and 2011. So what are the odds of our computers developing minds of their own?
Virtually 100% has to be the answer. Much like the examples mentioned above, researchers backed by universities, big business, and other funding sources are pushing us towards a future of intelligent machines. While the bipedal T-800s of Terminator fame are a long way from reality – the current state of the art in bipedal robot 'marathons' is a 200m course, with many competitors falling short of the finish line – HAL, a computer with a mind, may not be that far off.
HAL is an early representation of what has come to be known as a cognitive computer[1]. Despite rumours that 2001's HAL was a dig at IBM – HAL is one letter ahead – his name officially comes from 'Heuristically programmed ALgorithmic computer' and neatly captures the essence of cognitive computing: cognitive computers learn through experience (heuristics).
Ironically, despite the rumours of HAL being one step ahead of IBM, IBM is now leading cognitive computing solutions with Watson, a machine best known for outperforming two human champions on the American game show Jeopardy!
While outperforming human competitors on a game show is an impressive parlour trick, IBM has its eye on something bigger than Jeopardy! prize money. Watson is seen as a key contributor to a computing revolution aimed at helping rather than harming mankind through business and other applications.
Locally, co-development of Watson has been mentioned as part of CMDHB's project Swift. In line with the heuristic algorithms that gave HAL his name, Watson learns through experience. Presumably in exchange for access to CMDHB's data, Watson will be able to provide clinicians with recommended actions and likely diagnoses based on the experience gleaned from that big data. In conjunction with existing robotic initiatives, Watson has the potential to form a key part of addressing the healthcare concerns raised by our aging population.
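Stripped of the marketing, 'learning through experience' can be pictured as a very simple loop: record what happened in past cases, then recommend whatever most often followed similar cases. The toy Python sketch below is only a cartoon of that idea, using invented symptom and diagnosis names, and bears no relation to how Watson or project Swift is actually built.

```python
from collections import Counter, defaultdict

# Toy "experience" store: counts of the diagnoses that followed each symptom set.
# Every symptom and diagnosis below is invented purely for illustration.
experience = defaultdict(Counter)

def learn(symptoms, diagnosis):
    """Record one past case - the 'experience' the system learns from."""
    experience[frozenset(symptoms)][diagnosis] += 1

def suggest(symptoms, top_n=3):
    """Rank likely diagnoses for a new case by how often each one
    followed the same symptoms in previously recorded cases."""
    return experience[frozenset(symptoms)].most_common(top_n)

# A handful of made-up past cases...
learn({"fever", "cough"}, "influenza")
learn({"fever", "cough"}, "influenza")
learn({"fever", "cough"}, "common cold")

# ...and a recommendation for a new patient presenting the same symptoms.
print(suggest({"fever", "cough"}))  # [('influenza', 2), ('common cold', 1)]
```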
Ultimately, it seems that the development of intelligent machines is shaping up to be a good thing. Why, then, have some academics, including Stephen Hawking, proposed that it may actually be a concern, a threat to our livelihoods and existence?
With machines that can learn, change, and evolve at timescales that far outpace our slow biological evolution and caveman hardware, it would be naïve to think that AI could not surpass human capability once we let the genie out of the bottle. With the new breed of cognitive computers beginning to string together simple sentences, fears of being outsmarted by computers, and perhaps being put out of a job, are understandable.
If the projected growth of cognitive computing matches Jeremy Howard's expectations for the next five years, the potential for change in the world is equal parts exciting and terrifying. The real question is how reliable these estimates are, especially given the lack of flying cars, hoverboards, and robot butlers we have been promised so many times before. Notably, while we are making great strides in well-controlled environments, AI still has a hard time making sense of the somewhat messier real world. Indeed, the median view within the AI community is that human-like AI is still decades away.
For now at least we can take comfort in the fact that the academic world at large is divided on both the severity and the immediacy of the threat posed by artificial intelligence. Furthermore, some working with AI see its development as a non-issue[2]. While computers may be able to beat us at Jeopardy!, pass the Turing test by impersonating a 15-year-old boy, and play video games like a human, they cannot yet learn without our assistance or walk across uneven surfaces on two legs.
Ultimately, artificial intelligence is likely to be less of a threat and more of an asset. While it may replace some jobs, it also holds the potential to enable important work to be done by more people. In Jeremy Howard's TED video, cognitive computers are presented as a way to allow non-health workers to help generate the insights and breakthroughs that health workers can then implement. In a less restrained video from Ray Kurzweil, the idea of expanding our cognitive ability by linking our brains to AI is considered; this would essentially tie advances in computer intelligence to advances in our own. In this scenario, extended cognitive capacity may even become available through the cloud, making Intelligence as a Service a real possibility[3].
While Intelligence as a Service may sound fanciful, the more general concept of AI expanding our intelligence is broadly compatible with psychological theories about human cognition. Essentially, human beings make use of the world around them: as the world provides additional resources for cognition – from cave walls, to pen and paper, to the abacus, to calculators, to computers, to Watson – we expand our cognitive reach by outsourcing more of our thinking to the environment around us. Our brains are master tool users, with the potential to take over AI before it has the chance to take over us.
The hopeful answer, then, is that AI will be another tool that we can harness to our benefit. Failing that, if AI does outpace us, here's hoping that our experience is more like that of Joaquin Phoenix and Scarlett Johansson in Her, and less like that of the Avengers in Age of Ultron or Sarah Connor in the Terminator franchise.
Video: https://www.youtube.com/watch?v=fk24PuBUUkQ
[1] For more on cognitive computers, check out Deloitte's thought leadership on this topic:
http://dupress.com/articles/what-is-cognitive-technology/
http://dupress.com/articles/2014-tech-trends-cognitive-analytics/
[2] Reassuringly, cognitive computing expert Professor Mark Bishop argues that machines lack the necessary 'humanity' (understanding, consciousness, etc.) to make the threat raised by Stephen Hawking a reality. However, in line with scenarios presented in the Terminator films, Professor Bishop fears a future where artificial intelligence applied in a military setting is given permission to decide whether or not to engage a target. Additionally, the claim that AI is non-threatening because it does not possess characteristics we see in people assumes that humanness is necessary for something to become threatening; some of the most threatening organisms in recent history (HIV, Ebola, swine flu) also, to our knowledge, lack these human factors.
http://www.independent.co.uk/news/science/stephen-hawking-right-about-dangers-of-ai-but-for-the-wrong-reasons-says-eminent-computer-expert-9908450.html
[3] If this comes to pass, it brings with it wider concerns at the societal level. If we are able to purchase additional brain power and intelligence on a whim, who is most likely to access this additional intelligence, what will they use it for, and what will it mean for the health of our society? Will it be used by individuals to gain a competitive advantage, or by society at large to even the playing field?