Advanced science and the future of
government
Robots and Artificial Intelligence			 Genomic Medicine				 Biometrics
WORLD GOVERNMENT SUMMIT THOUGHT LEADERSHIP SERIES
Cover image - © vitstudio/Shutterstock
While every effort has been taken to verify the
accuracy of this information, The Economist
Intelligence Unit Ltd. cannot accept any
responsibility or liability for reliance by any person
on this report or any of the information, opinions
or conclusions set out in this report.
Contents
Introduction
Chapter 1: Robots and Artificial Intelligence
Chapter 2: Genomic Medicine
Chapter 3: Biometrics
Foreword
Advanced science and the future of government is an Economist Intelligence Unit report for the 2016
World Government Summit to be held in the UAE. The report contains three chapters:
1.	 Robots and Artificial Intelligence
2.	 Genomic Medicine
3.	 Biometrics
The findings are based on an extensive literature review and an interview programme conducted by
the Economist Intelligence Unit between September and December 2015. This research was commissioned
by the UAE Government Summit. The Economist Intelligence Unit would like to thank the following
experts who participated in the interview programme.
Robots and Artificial Intelligence
Frank Buytendijk – Research VP & Distinguished Analyst, Gartner
Dr Andy Chun – Associate Professor, Department of Computer Science, City University of Hong Kong
Tom Davenport – President’s Distinguished Professor of Information Technology & Management,
Babson College
Martin Ford – Author, Rise of the Robots: Technology and the Threat of a Jobless Future and winner of the
Financial Times and McKinsey Business Book of the Year award, 2015
Sir Malcolm Grant CBE – Chairman of NHS England
Taavi Kotka – Chief Information Officer, government of Estonia
Paul Macmillan – DTTL Global Public Sector Industry Leader, Deloitte
Liam Maxwell – Chief Technology Officer, UK Government
Prof Jeff Trinkle – Director of the US National Robotics Initiative
Gerald Wang – Program Manager for the IDC Asia/Pacific Government Insights Research and Advisory
Programs
Genomic Medicine
Karen Aiach – CEO, Lysogene
Dr George Church – Professor of Genetics at Harvard Medical School and Director of PersonalGenomes.org
Dr Bobby Gaspar – Professor of Paediatrics and Immunology at the UCL Institute of Child Health and
Honorary Consultant in Paediatric Immunology at Great Ormond Street Hospital for Children
Dr Eric Green – Director of the National Human Genome Research Institute
Dr Kári Stefánsson – CEO, deCODE
Dr Jun Wang – former CEO, the Beijing Genomics Institute
Biometrics
Dr Joseph Atick – Chairman of Identity Counsel International
Daniel Bachenheimer – Technical Director, Accenture Unique Identity Services
Kade Crockford – ACLU Director of Technology for Liberty Program
Mariana Dahan – World Bank Coordinator for Identity for Development
Dr Alan Gelb – Senior Fellow at the Center for Global Development
Dr Richard Guest – Senior Lecturer in Computer Science at the University of Kent
Terry Hartmann – Vice-President of Unisys Global Transportation and Security
Georg Hasse – Head of Homeland Security Consulting at Secunet
Jennifer Lynch – Senior Staff Attorney at the Electronic Frontier Foundation
C. Maxine Most – Principal at Acuity Market Intelligence
Dr Edgar Whitley – Associate Professor in Information Systems at the LSE
The Economist Intelligence Unit bears sole responsibility for the content of this report. The findings
and views expressed in the report do not necessarily reflect the views of the commissioner. The report
was produced by a team of researchers, writers, editors, and graphic designers, including:
Conor Griffin – Author and editor (Robots and Artificial Intelligence; Genomic Medicine)
Adam Green – Editor (Biometrics)
Michael Martins – Author (Biometrics)
Maria-Luiza Apostolescu – Researcher
Norah Alajaji – Researcher
Dr Bogdan Popescu – Adviser
Dr Annie Pannelay – Adviser
Gareth Owen – Graphic design
Edwyn Mayhew – Design and layout
For any enquiries about the report, please contact:
Conor Griffin
Principal, Public Policy
The Economist Intelligence Unit
Dubai | United Arab Emirates
E: conorgriffin@eiu.com
Tel: + 971 (0) 4 433 4216
Mob: +971 (0) 55 978 9040
Adam Green
Senior Editor
The Economist Intelligence Unit
Dubai | United Arab Emirates
E: adamgreen@eiu.com
Tel: + 971 (0) 4 433 4210
Mob: +971 (0) 55 221 5208
Introduction
Governments need to stay abreast of the latest developments in science and technology, both to
regulate such activity, and to utilise the new developments in their own service delivery. Yet the pace
of change is now so rapid that it can be difficult for policymakers to keep up. Identifying which developments
to focus on is a major challenge. Some are subject to considerable hype, only to falter when they are
applied outside the laboratory.
Why focus on robots and AI, genomic medicine, and
biometrics?
This report focuses on three advances which are the subject of considerable excitement today:
robots and artificial intelligence (AI); genomic medicine; and biometrics. The three share common
characteristics. For instance, they all run on data, and their rise has led to concerns about privacy
rights and data security. In some cases, they are progressing in tandem. Genomic medicine is
generating vast amounts of DNA data and practitioners are using AI to analyse it. AI also powers
biometric facial and iris recognition.
These are not the only developments that are relevant to governments, of course. Virtual reality
headsets immerse users in a 3D world. Surgeons could use them to practise risky
surgeries on human-like patients, while universities are already using them to design enhanced
classes for students. 3D printing produces components one layer at a time, allowing for more intricate
design, as well as reducing waste. Governments are starting to use the technology to “print” public
infrastructure, such as a new footbridge in Amsterdam, designed by the Dutch company MX3D.
Nanotechnology describes the manipulation of individual atoms and molecules on a tiny scale – one
nanometre is a billionth of a metre. Nanoscale drug delivery could target cancer cells with new levels
of accuracy, signalling a major advance in healthcare quality. Brain-mapping programmes like the US
government-funded BRAIN initiative could allow mankind to finally understand the inner workings of
the human brain and usher in revolutionary treatments for conditions such as Alzheimer’s disease and
depression.
However, robots and AI, genomic medicine, and biometrics share three characteristics which
mark them out as especially critical for governments. First, all three offer a clear way to improve, and
in some cases revolutionise, how governments deliver their services, as well as improving overall
government performance and efficiency. The three developments have also been trialled, to a certain
extent, and so there is growing evidence on their effectiveness and how they can be best implemented.
Finally, they are among the most transformative developments in terms of the degree to which they
could change the way people live and work.
1. Robots and AI – Their long-heralded arrival is finally here
Robots and artificial intelligence (AI) can automate and enhance work traditionally done by humans.
Often they operate together, with AI providing the robot with instructions for what to do. Google’s
driverless cars are a much-cited example.
The subject is of critical importance for governments. Robots are moving beyond their traditional
roles in logistics and manufacturing and AI is already far more advanced than many people realise
– powering everything from Apple’s personal assistant, Siri, to IBM’s Watson platform. Much of
today’s AI is based on a branch of computer science known as machine learning, where algorithms
teach themselves how to do tasks by analysing vast amounts of data. It has been boosted by rapid
expansions in computer processing power; a deluge of new data; and the rise of open-source software.
Today, AI algorithms are answering legal questions, creating recipes, and even automating the writing
of some news articles.
Robots and artificial intelligence – A combined approach
Source: EIU
Artificial intelligence – Capable of doing the knowledge work traditionally done by humans. Can provide instructions to the robot for what to do.
Robots – Capable of doing the manual work traditionally done by humans. Can take action based on the instructions.
Robots and AI have the potential to greatly enhance the work of governments and the public sector,
by supporting automation, personalisation, and prediction. Automated exam grading can free up
human teachers to focus on teaching, while automated robot dispensaries have reduced error rates
in pharmacies. Governments can emulate Netflix, an online video service, by using AI to personalise
the transactional services they provide to citizens. Crime-prediction algorithms are allowing police to
intervene before a crime takes place.
Some worry about a future era of “superintelligence”, led by advanced machines that are beyond
the comprehension of humans. Others worry, with good reason, about the nearer-term effects on jobs
and security. As a result, governments need to strike the right balance between supporting the rise of
robots and AI, and managing their negative side effects.
How will robots and AI benefit governments?
Source: EIU
Three key benefits – automation, personalisation, and prediction and prevention – applied across administration, health & social care, education, justice & policing, transactional services, and transport & emergency response.
2. Genomic medicine – Ushering in a new era of personalisation
Genomic medicine uses an individual’s genome – ie, their unique set of genes and DNA – to
personalise their healthcare treatment. Genomic medicine’s advance has been boosted by two major
developments. First, new technology has made it possible, and affordable, for anybody to quickly
map their own genome. Second, new gene-editing tools allow practitioners to “find and replace” the
mutations within genes that give rise to disorders.
Initiatives for sequencing genomes around the world
Source: EIU
Human Genome Project (1990-2003) – An international research collaboration to carry out the first ever sequencing of the human genome.
Personal Genome Project (2005-) – A Harvard-led project that aims to sequence and publish the genomic data of 100,000 volunteers.
1,000 Genomes Project (2008-2015) – An international research project that sequenced more than 2,500 genomes and identified many rare variations.
100,000 Genomes Project (2013-) – A 4-year project to sequence 100,000 genomes from UK NHS patients with rare diseases and cancers, and their families.
Genome Arabia (2013-) – A project to sequence up to 500 individuals from Qatar, Bahrain, Kuwait, UAE, Tunisia, Lebanon, and KSA.
Saudi Human Genome Program (2013-) – A 5-year project to analyse more than 20,000 Saudi genomes to better understand the genetic basis of disease.
Much of genomic medicine is relatively straightforward. Rare disorders caused by mutations
in single genes are already being treated through gene editing. In time, these disorders may be
eradicated altogether. For common diseases, such as cancer, patients’ genomic data could lead to more
sophisticated preventative measures, better detection, and personalised treatments.
Other potential applications of genomic medicine are mind-boggling. For instance, researchers are
exploring whether gene editing could make animal organs suitable for human transplant, and whether
“gene drives” in mosquito populations could help to eradicate malaria. The fast pace of development
has given rise to ethical concerns. Some worry that prospective parents may try to edit desirable traits
into their embryos’ genes, to try to increase their baby’s attractiveness or intelligence, for example.
This, critics argue, is the fast route back to eugenics and governments need to respond appropriately.
How will genomic medicine affect healthcare?
Source: EIU
Rare disorders (eg, cystic fibrosis) – Diagnosing, treating and eradicating them.
Common diseases (eg, cancer, Alzheimer's) – Enhancing screening, prevention and treatment.
Epidemic diseases and a lack of organ donors – Gene drives and next-gen transplants.
3. Biometrics – Mapping citizens, improving services
A biometric is a unique physical or behavioural trait, such as a fingerprint, iris or signature. Unique to
every person, and collectable through scanning technologies, biometrics provides each person with a
unique identifier that can be used for everything from authorising mobile phone bank payments
to quickly locating medical records after an accident or during an emergency.
Humans have used biometrics for hundreds of years, with some records suggesting fingerprint-
based identification as far back as the Babylonian era of 500 BC. But its full potential is only now being
realised, thanks to rapid developments in technology and the growing need for a more secure and
efficient way of identifying individuals.
From a landmark national identification initiative in India to border control initiatives in Singapore,
the US and the Netherlands, biometrics can be used in a wide range of government services. It is
improving the targeting of welfare payments; helping to cut absenteeism among government workers;
and improving national security. However, its use raises ethical challenges that governments need
to manage – privacy issues, the risk of “mission creep”, data security, public trust, and the financial
sustainability of new technology systems. How can governments both utilise the benefits of biometric
tools and manage the risks?
What is biometrics?
Source: EIU
Types of biometrics – Physiological: vein-pattern, palm-pattern, facial, fingerprint, iris, DNA. Behavioural: keystroke, signature, voice.
How is biometrics being used by governments?
Source: EIU
Secure digital services; reducing health costs; biometric roll calls; targeted welfare; virtual justice; eliminating ghost workers; biometric elections; smart borders.
How does this report help policymakers?
This report is designed to help policymakers in three ways. First, robots and AI, genomic medicine, and
biometrics are technical topics and it can be difficult for non-experts to understand exactly what they
are. What’s more, they are often poorly explained in the media articles that report on them. This can
lead to misunderstandings, particularly when it comes to the risk of imminent negative consequences.
This report aims to address this, by providing a clear and concise overview of what each of the advances
entails, as well as summarising how they have developed to date.
Second, discussions about the impact of robots, AI, genomic medicine and biometrics often
focus on their use in the private sector. However, advances in all three fields could transform how
governments deliver services, as well as enhancing government productivity and efficiency. This report
describes these potential impacts on governments’ work, citing examples from around the world.
Finally, advances in all three areas require a response from governments. In some cases, new
legislation and policies will be needed. For instance, new guidelines are required for storing biometric
data for law-enforcement purposes to guard against the possible targeting of ethnic minorities.
Companies must be forbidden from using citizens’ genomic data to discriminate against them.
Robots and AI will cause some jobs to disappear and so policies such as guaranteed incomes will need
consideration.
In other cases, governments will need to support the advances by unblocking bottlenecks. For
instance, universities and hospitals will need to design new courses for students and staff on how
to use, store, and analyse patients’ genomic data. In certain situations, particularly those involving
ethical issues, the optimal response is unclear and is likely to differ across countries. For instance, how
does AI interpreting data from surveillance cameras affect “traditional” privacy rights? Should the
government support research into the genetic basis of intelligence? This report provides guidance to
government leaders, who must answer these tough questions in the years ahead.
Chapter 1: Robots and Artificial Intelligence
Executive Summary
Background
Jeopardy! is a long-running American quiz show with a famous twist. Instead of the presenter asking
contestants questions, he provides them with answers. The contestants must then guess the correct
question. In 2011, a first-time contestant called Watson shocked viewers when it beat Jeopardy!’s
two greatest-ever champions – who between them had won more than US$5m. Although it sounded
human, Watson was actually a machine created by IBM and powered by AI.
Some dismissed the achievement as trivial. After all, computers have been beating humans at chess
for years. However, winning Jeopardy! was a far bigger achievement. It required Watson to understand
tricky colloquial language (including puns), draw on vast pools of data, reason as to the best response,
and then enunciate this clearly at the right time. Although it was only a TV quiz show, Watson’s
victory offered a vision of the future, where robots and AI potentially carry out a growing portion of the
work traditionally done by humans.
Robots and artificial intelligence (AI) can automate
and enhance the work that is traditionally done
by humans. Often they operate together, with AI
providing the robot with instructions for what to do.
Google’s driverless cars are a prominent example.
The subject is of critical importance. Robots are
moving beyond their traditional roles in logistics
and manufacturing. AI is already far more advanced
than many people realise – powering everything
from Apple’s personal assistant, Siri, to IBM’s
Watson platform. Much of today’s AI is based on
a field of computer science known as machine
learning, where algorithms teach themselves how
to do tasks by analysing vast amounts of data. It
has been boosted by rapid expansions in computer
processing power; a deluge of new data; and the
rise of open-source software. Today, AI algorithms
are answering legal questions, creating recipes, and
even automating the writing of some news articles.
Some worry about a new era of
“superintelligence”, led by advanced machines
that are beyond the comprehension of humans.
Others worry about the near-term effects on
jobs and security. Critically, however, robots and
AI also have the potential to greatly enhance
government work. Automated exam grading can
free up human teachers to focus on teaching, while
automated robot dispensaries have reduced error
rates in pharmacies. Governments can emulate
Netflix, an online-video service, by using AI to
offer personalised transactional services. Crime-
prediction algorithms are allowing police to
intervene before a crime takes place.
This chapter starts with an overview of what
exactly robots and AI are, before explaining why
they are now experiencing rapid uptake, when they
haven’t in the past. It then assesses how robots
and AI can improve the work of governments in
areas as diverse as education, justice, and urban
planning. The chapter concludes with suggestions
for government leaders on how to respond.
Robots and AI: What are they?
Defining robots and AI is difficult since they cover a vast spectrum of technologies – from the machines
zooming around Amazon’s warehouses to the automated algorithms that account for an estimated
70% of trades on the US stock market.1
One approach is to think in terms of capabilities. Robots are
machines that are capable of automating and enhancing the manual work done by humans. AI is
software that is capable of automating and enhancing the knowledge-based work done by humans.
Often they operate together, with AI providing the robot with instructions for what to do. Robots and
AI do not simply mimic what humans do – they can draw on their own strengths. In some cases, this
allows them to do things that no human, no matter how smart or physically powerful, could ever do.
Robots and artificial intelligence – A combined approach
Source: EIU
Artificial intelligence – Capable of doing the knowledge work traditionally done by humans. Can provide instructions to the robot for what to do.
Robots – Capable of doing the manual work traditionally done by humans. Can take action based on the instructions.
Robots – New shapes, new sizes; More automated, more capable
The term “robot” is derived from a Slavic word meaning “monotonous” or “forced labour”, and
gained popularity through the work of science fiction authors such as Isaac Asimov. In the 1950s, the
Massachusetts Institute of Technology (MIT) demonstrated the first robotic arm and in 1961, General
Motors installed a 4,000 lb version in its factory and tasked it with stacking die-cast metal. Over time,
the use of robots in logistics and manufacturing grew. However, their long-heralded entrance into
other sectors, such as fast food and healthcare, is yet to be realised. This looks set to change.
When people think about robots, they typically think about humanoids – ie, those that look and
act like humans. In June 2015, South Korea’s DRC-HUBO humanoid won the annual DARPA Robotics
Challenge after demonstrating an impressive ability to switch between walking and “wheeling”.
However, humanoids remain limited. They are prone to falling over and have trouble dealing with
uncertain terrain. The logic behind developing them is also questionable. While humans can carry out
an impressive range of tasks, we are not necessarily well suited to many of them – our arms are too
weak, our fingers are too slow, and most of us are too big to get into tight spaces. Building robots to
emulate humans might thus be a self-limiting approach.
A separate breed of robot is more promising. These look nothing like humans. Instead, they are
designed entirely with their environment in mind and come in many shapes and sizes. Kiva robots
(since renamed Amazon Robotics) look like large ice hockey pucks. They glide under boxes of goods
and transfer them across Amazon’s warehouses. They bear little resemblance to the Prime Air drone
robots that Amazon wants to use to deliver packages; or to the Da Vinci, the world’s most popular
surgical robot, which looks like a set of octopus arms. While they look different, this breed of robot
shares a common goal: mastering a narrow band of tasks by using the latest advancements in robotic
movement and dexterity.
These robots also differ in their degree of automation. Amazon’s Kiva robots operate largely
independently, and few humans are visible in the next-generation warehouses where they operate –
Amazon’s management forecasts that their use will lead to a 20-40% reduction in operating costs.2
By
contrast, the Da Vinci remains directly under the control of human surgeons – essentially providing
them with extended “superarms” with capabilities and precision far beyond their own.
Modern-day robots in action
Source: EIU
Amazon robots – The robots move around Amazon warehouses independently, collecting goods and bringing them to human packagers for dispatch.
Da Vinci – Miniaturised surgical instruments are mounted on three robotic arms, while a fourth arm contains a 3D camera that places a surgeon inside the patient's body.
Sawyer – A robot produced by Rethink Robotics that is used in factories to tend to machines and to test circuit boards. Works alongside humans.
Paro – Developed by a Japanese firm called AIST to interact with patients suffering from Alzheimer's and other cognition disorders.
Agrobot – A robot developed by a Spanish entrepreneur that automates the process of picking fruits.
Spiderbot – A robot created by Intel that is made up of 3D-printed components. Can be controlled via a smartphone or smartwatch.
Artificial intelligence – Finally living up to its potential?
Today most people come across AI on a daily basis.
It powers everything from Google Translate, to
Netflix’s movie recommendations, to Apple’s
personal adviser, Siri. However, much of this
AI is “invisible” and takes place behind a computer screen, so many users have little idea that it is
happening.
The field of AI emerged in the 1950s when Alan Turing, a pioneering British codebreaker during the
second world war, published a landmark study in which he speculated about the possibility of creating
machines that could think.3
In 1956, the Dartmouth Conference in the US asked leading scientists to
debate whether human intelligence could be “so precisely described that a machine can be made to
simulate it”.4
At the conference, the nascent field was christened “artificial intelligence”, and wider
interest (and investment) began to grow.
However, the subsequent half-century brought crushing disappointment. In many cases there was
simply not enough data or processing power to bring scientists’ models, and the nuances of human
intelligence, to life. Today, however, many experts believe that we are entering a golden era for AI.
Firms like Google, Facebook, Amazon and Baidu agree and have started an AI arms race: poaching
researchers, setting up laboratories, and buying start-ups.
To understand what AI is and why it is now advancing so rapidly, it is necessary to understand the
nature of the human intelligence that it is trying
to replicate. For instance, solving a complex
mathematical equation is difficult for most
humans. To do so, we must learn a set of rules and
then apply them correctly. However, programming a computer to do this is easy. This is one reason why
computers long ago eclipsed humans at “programmatic” games like chess which are based on applying
rules to different scenarios.
On the other hand, many of the tasks that humans find easy, such as identifying whether a picture is
showing a cat or a dog, or understanding what someone is saying, are extremely difficult for computers
because there are no clear rules to follow. AI is now showing how this can be done, and much of it is
based on a field of computer science known as machine learning.
Machine learning – The algorithms that power AI
Machine learning is a way for computer programs (or algorithms) to teach themselves how to do tasks.
They do so by examining large amounts of data, noting patterns, and then assessing new data against
what they have learned. Unlike traditional computer programs, they don’t need to be fed with explicit
rules or instructions. Instead, they just need a lot of useful data.
Consider the challenge of looking at a strawberry and assessing whether it is ripe. How can a
machine-learning algorithm do this? First, large sets of “training data” are needed – that is, lots of
pictures of strawberries. If each strawberry is labelled according to its level of ripeness, the algorithm
can draw statistical correlations between each strawberry’s characteristics, such as nuances in size and
colour, and its level of ripeness. The algorithm can then be unleashed on new pictures of strawberries
and can use what it has learned to recognise those that are ripe.
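To make the idea concrete, the sketch below shows, in Python, how a simple supervised classifier might be trained on labelled examples and then applied to a new case. The features (redness and weight), the invented data and the choice of a basic statistical model from scikit-learn are illustrative assumptions, not a description of any real ripeness-detection system.

```python
# A toy "is this strawberry ripe?" classifier illustrating supervised learning.
# All features and numbers below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Training data: each strawberry is described by [redness (0-1), weight in grams]
# and labelled 1 (ripe) or 0 (unripe) by a human.
X_train = [
    [0.92, 18.0], [0.85, 16.5], [0.88, 20.1],   # ripe examples
    [0.35, 9.2],  [0.41, 11.0], [0.28, 8.4],    # unripe examples
]
y_train = [1, 1, 1, 0, 0, 0]

# The algorithm learns the statistical relationship between the features and the
# label from the examples -- no explicit "rules of ripeness" are programmed in.
model = LogisticRegression()
model.fit(X_train, y_train)

# A new, unlabelled strawberry can now be assessed against what was learned.
new_berry = [[0.80, 17.3]]
print("ripe" if model.predict(new_berry)[0] == 1 else "unripe")
```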
To perform this recognition, machine learning can use models known as artificial neural networks
(ANNs). These are inspired by the human brain’s network of more than 100 billion neurons –
interlinked cells that pass signals or messages between themselves, allowing humans to think and
carry out everyday tasks. In a (somewhat crude) imitation of the brain, ANNs are built on hierarchical
layers of artificial neurons, giving rise to the term “deep learning”.
When it is shown a new picture of a strawberry, each layer of the ANN deals with a different
approximation of the picture. The first layer may recognise the brightness and colours of individual
pixels. It passes these observations to the next layer, which builds on them by recognising edges,
shadows and shapes. The next layer builds on this again, before finally recognising that the image is
showing a strawberry and assessing whether it is ripe or not.
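The layered processing described above can be sketched in a few lines of Python. The network below is purely illustrative: its weights are random rather than learned, so it only demonstrates how data flows through successive layers, not a trained classifier.

```python
# A minimal forward pass through a tiny neural network, to make the idea of
# "layers of increasingly abstract features" concrete. Weights are random
# placeholders; a real network would learn them from labelled training data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer: a weighted sum of its inputs followed by a non-linearity (ReLU)."""
    return np.maximum(0, inputs @ weights + biases)

# Pretend input: a 4x4 greyscale image flattened into 16 pixel intensities.
pixels = rng.random(16)

# Hidden layers turn raw pixels into progressively higher-level features.
h1 = layer(pixels, rng.standard_normal((16, 8)), np.zeros(8))   # e.g. edges
h2 = layer(h1, rng.standard_normal((8, 4)), np.zeros(4))        # e.g. shapes

# Output layer: a single score squashed into a 0-1 "probability of ripe".
p_ripe = 1 / (1 + np.exp(-(h2 @ rng.standard_normal(4))))
print(f"Predicted probability that the image shows a ripe strawberry: {p_ripe:.2f}")
```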
What can machine-learning algorithms do? A surprising amount
Facebook’s AI laboratory has developed a machine-learning algorithm called DeepFace that
recognises human faces with a 97% accuracy rate. It does so by studying a person’s existing Facebook
pictures and identifying their unique facial characteristics (such as the distance between their eyes).
When a new picture is uploaded to Facebook, the algorithm automatically recognises the people in it
and invites you to tag them.
How neural networks work
A neural network is organised into layers. Information from individual pixels causes neurons in the first layer to pass signals to the second, which then passes its analysis to the third. Each layer deals with increasingly abstract concepts, such as edges, shadows and shapes, until the output layer attempts to categorise the entire image.
How Facebook recognises your face – The input layer takes the raw image; the hidden layers extract progressively more abstract features; the output layer recognises who the person is.
How to diagnose diseases – The input layer takes a patient's age, gender, symptoms, smoking and diet history, blood test, urine test, and genomic data; the hidden layers classify patterns and compare them against evidence; the output layer makes a diagnosis.
Sources: The Economist, EIU
The data that power algorithms do not need to be images. Algorithms can also make sense of
articles, video recordings, or even messy, “unstructured” data such as handwritten notes. Once an
algorithm has learned something, it can take an action, such as producing a written report explaining
the logic of its prediction, or sending instructions to a robot for which pieces of fruit to pick.
Taken as a whole, machine-learning algorithms can do many things – primarily tasks that are
routine, or can be “learned” by analysing historical data. Talk into your phone and a Google app can
instantly translate it into a foreign language. The results are imperfect, but improving, as algorithms
draw on ever-larger “translation memory” databases to understand what words mean in different
contexts. Netflix uses machine learning to “personalise” the homepage and movie recommendations
that users see. Algorithms infer a user’s preferences based on their past interactions on the site (such
as watching, scrolling, pausing, and ranking); the interactions of similar users; and contextual factors
(time of day, device, location, etc.). They then predict the content to which the user will be most
receptive.
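The sketch below illustrates the underlying logic of such recommendations in Python: score unseen titles for one user by weighting the preferences of users with similar viewing histories. The titles, ratings and similarity measure are invented for illustration and bear no relation to Netflix's actual system.

```python
# Toy user-based collaborative filtering: recommend titles a user has not yet
# watched by leaning on users with similar viewing histories. Illustrative only.
import numpy as np

titles = ["drama_a", "comedy_b", "documentary_c", "thriller_d"]

# Rows = users, columns = titles; values = implicit "liking" (0 = not watched).
ratings = np.array([
    [5, 0, 4, 1],   # user 0 (the person we recommend for)
    [4, 1, 5, 0],   # user 1, similar tastes to user 0
    [0, 5, 1, 4],   # user 2, different tastes
], dtype=float)

def cosine(u, v):
    """Similarity between two users' rating vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

target = 0
sims = [cosine(ratings[target], ratings[u]) for u in range(len(ratings))]

for item, title in enumerate(titles):
    if ratings[target, item] == 0:  # only score titles the user has not watched
        score = sum(sims[u] * ratings[u, item]
                    for u in range(len(ratings)) if u != target)
        print(f"{title}: predicted interest {score:.2f}")
```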
More surprisingly, machine learning is being applied to fields like writing and music composition.
While most people would not consider these to be “routine”, they are also based on data patterns
which can be learned and applied.
Machine learning in action
Source: EIU
Writing – Quill is a platform that automates the writing of financial reports and sports articles for outlets like Forbes.
Creating recipes – IBM's Watson analysed the Bon Appétit recipe database to recognise tasty food pairings and created an app to suggest recipes based on the ingredients that a person has available.
Financial advice – Wealthfront is an AI-powered financial adviser that assesses a person's characteristics (such as age and wealth) and their objectives, and then uses investing techniques to suggest what assets to invest in.
Music composition – Iamus is an algorithm that is fed with specific information, such as which instruments should be used and what the desired duration should be. It then creates its own orchestral compositions from scratch.
Video games – DeepMind developed a general learning algorithm that exceeded all human players in popular video games, from Space Invaders to car racing games. It was purchased by Google in 2014.
Not just learning, but teaching itself and improving
Much of machine learning involves making predictions based on probability, but on a scale that a
human brain could never achieve. An algorithm does not “know” that a strawberry is ripe in the same
way that a human brain does. Rather, it predicts whether the strawberry is ripe by evaluating the data
and comparing it with past evidence.
Having labels for the training data (such as “ripe” and “rotten” for pictures of strawberries) makes
things easier for the algorithm, but is not a prerequisite. “Unsupervised algorithms” take vast amounts
of data that make little sense to a human. If they see enough repeated patterns they will make their
own classifications. For instance, an algorithm may analyse massive sets of genomic data belonging
to thousands of people and discover that certain gene mutations are associated with certain diseases
(see chapter 2). In this scenario, the algorithm is teaching itself.
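A minimal illustration of unsupervised learning is sketched below in Python: a clustering algorithm is given unlabelled samples and groups them by pattern on its own. The two-number "mutation profiles" are invented, and k-means is just one simple stand-in for the far more sophisticated methods used on real genomic data.

```python
# Unsupervised learning in miniature: no labels are supplied, yet the algorithm
# discovers that the samples fall into two recurring patterns. Data is invented.
from sklearn.cluster import KMeans

# Each row is a made-up pair of mutation frequencies for one person.
profiles = [
    [0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # one recurring pattern
    [0.10, 0.90], [0.20, 0.80], [0.15, 0.95],   # a second recurring pattern
]

# Ask for two clusters; the algorithm decides for itself which rows belong together.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(clusters)   # e.g. [0 0 0 1 1 1] -- groupings found without any labels
```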
Practitioners do not need to spend lifetimes crafting hugely complex algorithms. Rather, “genetic
algorithms” are often used. As their name implies, they use trial-and-error to mimic the way natural
selection works in the living world. With each run of the program, the highest-scoring algorithms are
retained as “parents”. These are then “bred” to create the next generation of algorithms. Those that
don’t work are discarded. Once they are in use, algorithms can improve themselves by analysing the
accuracy of their predictions and making tweaks accordingly (known as “reinforcement learning”).
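The loop below is a bare-bones genetic algorithm in Python, matching the select, breed and mutate cycle just described. The "fitness function" here is a trivial mathematical stand-in for scoring how well a candidate algorithm performs on a real task.

```python
# A bare-bones genetic algorithm: score a population of candidates, keep the
# fittest as "parents", then breed and mutate them to form the next generation.
import random

def fitness(x):
    return -x * x + 10          # toy objective: the best candidate is x = 0

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):
    # Selection: rank by fitness and keep the top quarter as parents.
    parents = sorted(population, key=fitness, reverse=True)[:5]
    # Breeding and mutation: average two random parents, add small random noise.
    children = [
        (random.choice(parents) + random.choice(parents)) / 2 + random.gauss(0, 0.5)
        for _ in range(len(population) - len(parents))
    ]
    population = parents + children

print(f"Best candidate after evolution: {max(population, key=fitness):.3f}")
```

After a few dozen generations the surviving candidates cluster around the optimum, without anyone having specified how to get there.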
Survival of the fittest – How natural selection is applied to algorithms
Source: EIU
Randomly generate an initial population of algorithms; evaluate the fitness of each algorithm; if an algorithm meets the objective, stop; otherwise, keep only those that meet the survival criteria, discard the rest, and combine or mutate the survivors to create the next generation of algorithms; then repeat the evaluation.
Robots and AI – Merged in symphony
Driverless cars (and other automated vehicles) are perhaps the best example of how robots and AI
can come together to awesome effect. The global positioning system (GPS) provides the robot (ie,
the car) with a huge set of mapping data, while a set of radars, sensors, and cameras provide data on
what is happening around it. Machine-learning
algorithms evaluate all of this data and, based
on what they have previously learned, issue
real-time instructions for steering, braking, and
accelerating.
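A drastically simplified version of that sense-decide-act loop is sketched below in Python. The sensor readings, thresholds and hand-written rules are illustrative assumptions only; a real vehicle fuses GPS, radar, lidar and camera data through learned models rather than fixed rules.

```python
# A toy sense-decide-act cycle of the kind an automated vehicle runs many times
# per second. Values and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    distance_to_obstacle_m: float   # e.g. from radar/lidar
    lane_offset_m: float            # e.g. from camera-based lane detection
    speed_kph: float                # from the vehicle itself

def decide(frame: SensorFrame) -> dict:
    """Turn one frame of sensor data into steering, braking and throttle commands."""
    commands = {"steer": 0.0, "brake": 0.0, "throttle": 0.1}
    if frame.distance_to_obstacle_m < 10:      # obstacle close: brake hard
        commands["brake"], commands["throttle"] = 1.0, 0.0
    if abs(frame.lane_offset_m) > 0.3:         # drifting: steer back towards the centre
        commands["steer"] = -frame.lane_offset_m
    if frame.speed_kph > 50:                   # cap speed on this stretch of road
        commands["throttle"] = 0.0
    return commands

print(decide(SensorFrame(distance_to_obstacle_m=8.0, lane_offset_m=0.4, speed_kph=42)))
```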
The new era of driverless vehicles
Source: EIU
Driverless trucks – In May 2015, Daimler's 18-wheeler Freightliner, called the Inspiration Truck, was unveiled.
Driverless cars – Google has been working on its self-driving car project since 2009. It is currently being tested in Austin and California in the US.
Drone planes – DHL is using drones to deliver medicine to Juist, a small German island.
Drone ships – Rolls-Royce Holdings launched a virtual-reality prototype of a drone ship in 2014.
Other examples abound. Unlike harvesting corn, fruit picking still relies heavily on human hands. A
Spanish firm called Agrobot promises a robotic alternative. Its robot harvester is equipped with 14
arms for picking strawberries. Each arm has a camera that takes 20 pictures per second. Algorithms
analyse these images and assess the strawberries’ colour and shape against the desired level of
“ripeness”. If a strawberry is judged to be ripe, the robot’s arm positions its basket underneath it,
and a blade snips the stem. The whole process takes four seconds. Some human labour is needed to
supervise the robot, but much less than what is required to pick strawberries manually. The robot can
work night and day, and a new version, with 60 arms, is being trialled.5
The rise of robots and AI – Why now, and how far can it go?
The standard joke about robots and AI is that, like nuclear fusion, they have been the future for more
than half a century. Many techniques, like neural networks, date back to the 1950s. So why is today any
different? The main reason is that the underlying infrastructure powering robots and AI has changed
dramatically.
First, the processing power of computer chips has grown exponentially. People are often vaguely
familiar with Moore’s law – ie, the doubling roughly every two years of the number of transistors that can be put on
a microchip. However, its impact is rarely fully appreciated. The designers of the first artificial neural
networks in the 1960s had to rely on models with hundreds of transistor neurons. Today, those built by
Google and Facebook contain millions. This allows AI programs to operate at a speed that is hard for a
human to comprehend.
Second, AI systems run on data, and we live in a world that is deluged – from social media posts, to
the sensors that are now added to an array of machines and devices, to the vast archives of digitised
reports, laws, and books. In the past, even if such data were available, storing and accessing it would
have been cumbersome. Today, cloud computing means that much of it can be accessed from a laptop.
In 2011, IBM’s Watson was the size of a room. Now it is spread across servers in the cloud and can serve
customers across the world.
Finally, robots and AI are increasingly accessible to the world, rather than just to scientists. DIY
robot kits are much cheaper than industrial robots, and companies like EZ Robot even allow customers
to “print” robot components using 3D printers. In August 2015, Intel presented its “spiderbot” – a
spider-like robot constructed from 9,000 printed parts. A growing number of machine-learning
algorithms are free and open-source, as is the software on which many robots run (Robot Operating
System). This allows developers to quickly build on each other’s work. IBM has also made Watson
available to developers, with the aim of unleashing a new ecosystem of Watson-powered apps – like
those found in Apple’s iTunes store.
How far can robots and AI develop?
Robots and AI already offer the potential to automate work by emulating five key human capabilities:
movement, dexterity, sensing, reasoning and acting.
How robots and AI emulate human capabilities
Source: EIU
Movement (robots) – Being able to get from place to place. Robots move in many ways: hexapods walk on six legs like an insect, snakebots slither and can change the shape of their body, and wheelbots roll on wheels.
Dexterity (robots) – Using one's hands to carry out various tasks. Today's robots boast impressive dexterity: they can fold laundry, remove a nail from a piece of wood, and screw a cap on a bottle.
Sensing (AI) – Taking in data about the world, or about a problem. Computer vision can understand moving images, chemical sensors can recognise smells, sonar sensors can recognise sounds, and taste sensors can recognise flavours.
Reasoning (AI) – Thinking about what a new set of data means. Machine learning analyses data to identify patterns or relationships. It can be used to understand speech, images and natural language, and can assess new data against past evidence to make predictions or recommendations.
Acting (AI) – Acting on what you have discovered. Natural language and speech generation can be used to document findings, and findings can be given to a robot as instructions for how to act.
How far can they expand beyond this? A famous test developed by Alan Turing is the “imitation
game”. In it, an individual converses with two entities in separate rooms: one is a human and one
is an AI-powered machine. If the individual is unable to identify which is which, the machine wins.
Every year, the Loebner Prize is offered to any AI program that can successfully trick a panel of human
experts in this way. To date, none has come close – although other competitions, with shorter test
times, have claimed (much disputed) victories.
Passing the Turing test in full would require what is known as “broad AI” – ie, AI that can do all of the
things that the human brain can do, rather than just one or two narrow tasks. There are huge debates
among scientists about whether broad AI will be achievable and, if so, when. One challenge is that
much of how the human brain works remains a mystery, although projects such as the BRAIN Initiative
in the US and the Blue Brain Project in Switzerland, which seek to build biologically detailed digital
reconstructions of the human brain, aim to address this.
A survey of leading scientists carried out by philosopher Nick Bostrom in 2013 found that most
believed that there was a 50% chance of developing broad AI by 2040-50, and a 90% chance by 2075.6
If broad AI is achieved, some believe that it would then continue to self-improve, ushering in an era of
“super intelligence” and a phenomenon known as the “technological singularity” (see below).
Case study: What is the technological singularity?
If AI can reach a level where it matches the full
breadth of human intelligence, some futurists
argue that its ability to self-improve, backed by
ever-increasing computing power, will lead to an
“intelligence explosion” and the rise of “super
intelligence”. In such a scenario, machines would
design ever-smarter machines, all of which would
be beyond the understanding, or control, of even
the smartest human. The resulting situation – the
technological singularity – would be unpredictable
and unfathomable to human intelligence. Some
dream of a new utopia, while others worry
that super-intelligent machines may not have
humanity’s best interests at heart.
The technological singularity’s most famous
proponent is Ray Kurzweil, who predicts that it will
occur around 2045. Kurzweil argues that humans
will merge with the machines of the future, for
instance through brain implants, in order to keep
pace. Some “singularians” argue that super-
intelligent machines will tap into enhancements
in genomics and nanotechnology to carry out
mind-boggling activities. For instance, “nanobots”
– robots that work at the level of atoms or molecules
– could create any physical object (such as a car or
food) in an instant. Immortality could be achieved
through new artificial organs or by uploading your
mind into a robot.7
Perhaps not surprisingly, the technological
singularity has been dismissed by critics and
likened to a religious cult.8
However, it continues to
be debated, largely because of the achievements of
those advocating it. A serial inventor and futurist,
Kurzweil made 147 predictions in 1990 of what
would happen before 2009. These ranged from the
digitisation of music, movies, and books to the
integration of computers into eyeglasses. 86% of
the predictions later proved to be correct.9
In 2012
he was hired by Google as a director of engineering.
He also launched the Singularity University in
Silicon Valley, which is sponsored by Google and
Cisco, among others.
Narrow AI v broad AI v super AI
Source: EIU
Artificial Narrow Intelligence (ANI) – Equals or exceeds human intelligence, but in narrow areas only, such as language translation, spam filters, and Netflix recommendations. Already in place and improving quickly.
Artificial Broad Intelligence (ABI) – Can perform the full range of intellectual tasks that a human can. No credible examples exist to date. Expert predictions range from 2030 to 2100 to never.
Artificial Super Intelligence (ASI) – Much smarter than the best human brains in every field, including scientific creativity, general wisdom and social skills. Assuming ABI is achieved, expert predictions suggest ASI will happen less than 30 years later.
Discussions about the technological singularity generate both fascination and derision. It would
be unwise to dismiss it completely. While the human brain is complex, there is nothing supernatural
about it – and this implies that building something similar inside a machine could, in principle, be
possible. However, it is crucial to note that the vast majority of today's AI work does not aspire to be
“broad” or “super”. Rather it is “narrow” and fully focused on mastering individual tasks – especially
those that are repetitive or based on patterns. Despite this apparent limitation, even narrow AI covers
considerable ground.
How will robots and AI affect government?
There is much concern in policy circles about robots and AI. First there is the fear that they will destroy
jobs. Such worries were fuelled in 2013 when a study by academics at Oxford University predicted that
47% of jobs were at risk of replacement by 2030.10
Notably, many “safe” middle-class professions,
requiring considerable training, such as radiographers, accountants, judges, and pilots, appear to be
at risk. Other jobs appear less at risk – for the moment – particularly those which are highly creative,
unpredictable, or involve dealing with children, people who are ill, or people with special needs.
Jobs at risk of automation from robots and AI
Sources: Oxford University, EIU
From lower to higher estimated risk of automation: social workers, dentists, high school teachers, chief executives, fitness trainers, electrical engineers, software developers, detectives, judges, economists, historians, computer programmers, pilots, real estate agents, paralegals, jewellers, fashion models, loan officers, telemarketers.
The second fear concerns security threats. These gained traction in January 2015, when a group
of prominent thinkers, including Stephen Hawking and Elon Musk, signed an open letter calling for
responsible oversight of AI to ensure that research focuses on “societal benefit”, rather than simply
enhancing capabilities.11
Of particular concern is the risk posed by lethal autonomous weapons systems
(LAWS). LAWS are different from the remotely piloted drones that are already used in warfare: drones’
targeting decisions are made by humans, whereas LAWS can select and engage targets without any
human intervention. According to computer science professor Stuart Russell, they could include armed
quadcopters that can seek and eliminate enemy combatants in a city.12
Often described as the third
revolution in warfare, after gunpowder and nuclear arms, the first generation of LAWS are believed
by experts interviewed by the Economist Intelligence Unit to be close to complete. The remaining
barriers are legal, ethical, and political, rather than technical.
Fears about jobs and security are worthy of government attention. Crucially, however, robots and AI
also have the potential to greatly enhance the work of government. These improvements are possible
today and some government agencies have already started trials. The benefits will come in three main
forms and, in theory, could apply to almost all areas of a government’s work.
How will robots and AI benefit governments?
Source: EIU
Three key benefits – automation, personalisation, and prediction and prevention – applied across administration, health & social care, education, justice & policing, transactional services, and transport & emergency response.
Automation: Robots and AI can automate and enhance some government work. Such automation
will not necessarily spell the end for the employee in question. Rather, it could free up their time to
do more valuable and interesting tasks. It could also eliminate the need for humans to undertake
dangerous work such as defusing bombs.
Personalisation: In the same way that AI powers Netflix recommendations for subscribers, it could
also power a new generation of personalised government services and interactions – from personalised
treatment plans for patients, to personalised learning programmes for students and personalised
parole sentences for prisoners.
Prediction and prevention: One of AI’s main uses is making predictions based on what it has learned.
In certain situations, such predictions could allow governments to intervene and prevent problems
from occurring. This could dramatically enhance how police services and courts work, as well as
support the strategies of urban planners.
1. Education: Automated exam grading, adaptive learning platforms, and robot kits
Marking exams is repetitive and time-pressured – a combination that can allow mistakes to creep in.
As most exams are graded on specific criteria, and past examples are available, it seems a sensible
candidate for AI. In 2012, a study carried out by researchers at the University of Akron in Ohio tasked
an AI-powered “e-rater” with assessing 22,000 English literature essays.13
The grades awarded were
strikingly similar to those given by human evaluators. The main difference was the speed of the activity
– the e-rater was able to assess 16,000 essays in 20 seconds. It can also provide explanations for its
marking – including comments on grammar and syntax.
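The sketch below conveys the general approach in Python: extract simple features from each essay and fit a model to grades already awarded by human markers. It is a deliberately crude illustration; the features, essays and grades are invented, and real e-raters rely on far richer linguistic analysis.

```python
# A toy automated essay scorer: learn the relationship between simple text
# features and grades given by human examiners, then score a new essay.
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list:
    words = essay.split()
    return [
        len(words),                           # essay length
        len(set(w.lower() for w in words)),   # vocabulary size
        essay.count(","),                     # crude proxy for sentence complexity
    ]

# Past essays with the grade (0-10) a human examiner awarded. Invented examples.
essays = [
    "The theme of ambition drives the plot, and the imagery reinforces it.",
    "Good book. I liked it.",
    "The author contrasts duty and desire, which deepens the protagonist's conflict.",
]
human_grades = [8, 3, 9]

model = LinearRegression().fit([features(e) for e in essays], human_grades)

new_essay = "The narrative voice shifts, and this creates dramatic irony."
print(f"Predicted grade: {model.predict([features(new_essay)])[0]:.1f}")
```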
Unsurprisingly, the technology is not welcomed by all. In the US, the National Council of Teachers
of English has campaigned against it, claiming that it misses subtlety and rewards writing that is
geared solely towards test results (although this is arguably true of any criteria-based assessment).
Critics have also demonstrated how the algorithms can be “tricked” into awarding high marks to
essays that are not actually good. Despite the controversy, the role of e-raters looks set to grow – either
as a check on teachers’ grading, or working under teachers’ supervision. Australia’s Curriculum,
Assessment and Reporting Authority (ACARA) recently announced that e-raters will mark the country’s
national assessment programme for literacy and numeracy by 2017.
Less controversial are personalised education (or “adaptive learning”) programmes. As with
healthcare, a great deal of today’s education is delivered on a one-size-fits-all basis – with most
students using the same textbooks and doing the same homework. Teachers often have little option
but to “teach to the middle” – resulting in advanced students becoming bored and struggling students
falling behind.14
A company called Knewton has developed a platform that tracks students as they
complete online classes in maths, biology, and English, and attempt multiple-choice questions. It
assesses how each student performs and compares this with other students’ past records. It then
decides what problem or piece of content to show next. Pearson, the world’s largest education
company, has partnered with Knewton to deliver a similar service for college students called MyLab &
Mastering, which is used by more than 11m students worldwide every year.15
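The sketch below shows, in Python, the basic adaptive loop such platforms build on: keep a running estimate of a student's mastery and choose the next question accordingly. The update rule and question bank are invented assumptions; commercial platforms such as Knewton use far richer statistical models pooled across millions of students.

```python
# A minimal adaptive-learning loop: update a mastery estimate after each answer
# and pick the next question whose difficulty sits just above it. Illustrative only.

question_bank = {"q1": 0.2, "q2": 0.4, "q3": 0.6, "q4": 0.8}   # id -> difficulty (0-1)

def update_mastery(mastery: float, difficulty: float, correct: bool) -> float:
    """Nudge the estimate up for correct answers, down for incorrect ones."""
    step = 0.2 * (difficulty if correct else -(1 - difficulty))
    return min(1.0, max(0.0, mastery + step))

def next_question(mastery: float) -> str:
    """Choose the easiest question that is still harder than the current estimate."""
    harder = {q: d for q, d in question_bank.items() if d > mastery}
    pool = harder or question_bank
    return min(pool, key=pool.get)

mastery = 0.3
for qid, correct in [("q2", True), ("q3", True), ("q3", False)]:
    mastery = update_mastery(mastery, question_bank[qid], correct)
    print(f"after {qid} ({'right' if correct else 'wrong'}): "
          f"mastery={mastery:.2f}, next question: {next_question(mastery)}")
```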
As with most AI-led solutions, there is a degree of hype about such platforms, and their accuracy
and usefulness will depend on scale – the more users they have, the more accurate they will become.
However, several platforms have merits, particularly in subjects such as maths that require students to
master theories through repeated examples. More
experimental systems can recognise a student’s
emotional state (such as tiredness or boredom)
and provide motivational advice to help them persevere.
Given the complexity involved in interacting with children, humanoid robotic teachers are unlikely
to take over classrooms in the near future. However, “robot kits”, such as the Lego Mindstorms series,
are increasingly used in science, technology, engineering and maths (STEM) classes. Students are
given the kits and asked to construct a robot and program it to carry out tasks. Evidence suggests that
they can provide a more effective way of teaching various maths and engineering concepts – such as
equations.16
In contrast to some traditional STEM teaching methods, they also help to build teamwork
and problem-solving capabilities – key “21st-century skills” that schools are trying to nurture.17
2. Health & social care: Personalised treatment, robot porters, and tackling ageing
Much of the excitement around AI has focused on its potential use in healthcare. In 2015, @Point of
Care, a firm based in New Jersey, trained IBM’s Watson to answer thousands of questions from doctors
and nurses on symptoms and treatments, based
on the most up-to-date peer-reviewed research.
In an interview with the Economist Intelligence
Unit, Sir Malcolm Grant, chairman of NHS
(National Health Service) England, claimed that
the combination of AI and patients’ genomic data
could allow “clinicians to make more efficient
use of expensive drugs, such as those used in
chemotherapy, by attuning them to tumour DNA
and then monitoring their effect through a course of treatment”.
While much of AI’s potential in healthcare is still at the trial stage, robots are already present in
hospitals. In 2015, the UK media reported excitedly about an NHS plan to introduce robot porters. The
machines, which look similar to those found in Amazon’s warehouses, will transport trolleys of food,
linen and medical supplies. In pharmacies, robot prescription systems are increasingly common. At the
University of California San Francisco Medical Center, a doctor produces an electronic prescription and
passes it to a robot arm that moves along shelves picking out the medicine needed. The pills are sorted
and dispensed into packets for patients. Under the system, the error rate has fallen from 2.8% to 0%.18
Surgical robots are used in a growing number of operations, including coronary bypasses, hip
replacements, and gynaecological surgeries. In the US, they carry out the majority of prostate cancer
operations (radical prostatectomies).19
In certain operations, they offer greater precision and reduced
scarring, and can reduce blood loss.20
However, they are also expensive and must remain under the
close control of trained surgeons at all times.
The pros and cons of surgical robots
Strengths: increased precision; reduced blood loss; less scarring; improved recovery times.
Limitations: not suitable for all surgery types; significant training required; high fixed costs; bulkiness of equipment.
Source: EIU
Robots have been touted as a way to address the global ageing phenomenon. Over the next 20 years,
the number of people aged 65 and over will almost double to 1.1bn.21
Diseases such as dementia are
set to become more prevalent, while the labour force – whose taxes pay for treatments – will shrink.
Dementia patients often suffer from a lack of social engagement, which can magnify feelings of
loneliness, leading to depression and cognitive decline.22
PARO, a socially assistive robot (SAR), looks
like a baby seal and engages patients by acting like a pet. It can recognise when a person is calling its
name and can learn from repeated behaviour – if a patient scratches its neck after picking it up off the
floor, it will look for a scratch every time it gets picked up.23
PARO is approved by the US Food and Drug
Administration as a therapeutic device, and a 2013 trial found that it had a moderate-to-large effect
on patients’ quality of life.24
What type of medical robots might we see in the future? Scientists are excited about the potential of
“micro-bots”. These tiny robots would move inside patients’ bodies – helping to deliver drugs, address
trouble spots (such as a fluid build-up), or repair organs. Researchers are also working on robots that
are soft and resemble body tissue. In the US, researchers at MIT have developed prototype “squishy
robots” that can switch between hard and soft states and could, in theory, move through the body
without damaging organs.
3. Justice & security: Online dispute resolution and predictive policing
AI is already in play in the legal world. Courts use automatic speech recognition to transcribe court records
outlining who said what during a trial. Judges and lawyers use apps like ROSS Intelligence, built on
IBM’s Watson platform, to pose questions such as “Is a bankrupt company allowed to do business?”.
The app delivers instant answers, complete with citations and useful references to legislation or case
law.
In time, AI could usher in a new generation of automated online courts, particularly for the small
civil disputes that often clog up judicial systems. Canada is launching an online tribunal for small civil
disputes that will allow claimants to negotiate with the other party and, failing that, face an online
adjudication (run by humans). AI could enhance the system by “predicting” the outcome of a dispute
before claimants begin. An algorithm has already been developed that can predict the results of more
than 7,000 US Supreme Court cases with more than 70% accuracy, using only data that was available
before the case.25
If embedded into an online court, such predictive algorithms could encourage
claimants to drop their claim (if ill-advised) or encourage the other party to settle. They could also be
used by governments to channel legal aid more effectively, by identifying those who have a worthy
case but no financial means of pursuing it.
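To illustrate how such a predictive tool could be built, the sketch below trains a classifier on features known before a case is heard. The features, toy data and library choices are invented for illustration and are far simpler than the model behind the Supreme Court study cited above.

```python
# Illustrative sketch: predicting a case outcome from features known before the hearing.
# The features and toy data are invented; real studies use rich, historical case records.

from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Each case: features available before the decision, plus the eventual outcome.
cases = [
    ({"issue_area": "tax", "lower_court": "affirmed", "petitioner": "company"}, "reverse"),
    ({"issue_area": "tax", "lower_court": "reversed", "petitioner": "individual"}, "affirm"),
    ({"issue_area": "civil_rights", "lower_court": "affirmed", "petitioner": "individual"}, "reverse"),
    ({"issue_area": "civil_rights", "lower_court": "reversed", "petitioner": "government"}, "affirm"),
]
features, outcomes = zip(*cases)

model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(list(features), list(outcomes))

# Predict a new, unseen dispute before it is heard.
new_case = {"issue_area": "tax", "lower_court": "affirmed", "petitioner": "individual"}
print(model.predict([new_case])[0], model.predict_proba([new_case]).max())
```

In an online court, a predicted outcome and its probability could be shown to both parties before any filing fees are paid.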
Police and security services are also using AI.
Facial-recognition algorithms have been closely
studied, and there is excitement over recent
enhancements that allow them to recognise
somebody even when their face is obscured. As
revealed by Edward Snowden, the US National Security Agency also uses voice recognition software to
convert phone calls into text in order to make the contents easier to search.26
Police are especially interested in using AI for “predictive policing”. As Tom Davenport, a professor
at Babson College, put it, “Why should the police only show up after the crime has been committed?”.
US firm PredPol analyses a feed of data on location, place and time of crimes to predict “hotspots”
(areas of 500 feet by 500 feet) where crime is likely to happen within the next 12 hours. A study published
in October 2015 found that the algorithm was able to predict 4.7% of crimes in Los Angeles, compared
with 2.1% for experienced analysts. It concluded that deploying extra police in hotspot areas would
save the Los Angeles Police Department US$9m per year.27
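The underlying idea can be illustrated with a simple recency-weighted grid, shown below. PredPol's actual model is proprietary and more sophisticated; the cell size matches the 500-foot areas described above, but the decay half-life, data and scoring rule are illustrative assumptions.

```python
# Illustrative grid-based hotspot scoring (PredPol's actual model is proprietary).
# Crimes are binned into 500 ft x 500 ft cells; recent incidents count for more,
# and the top-scoring cells are flagged for extra patrols over the next 12 hours.

from collections import defaultdict

CELL_FT = 500.0       # cell size in feet
HALF_LIFE_HRS = 24.0  # how quickly an incident's influence decays (assumed)

def cell_of(x_ft: float, y_ft: float) -> tuple[int, int]:
    """Map a location (in feet on a local grid) to its cell index."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hotspot_scores(crimes: list[dict], now_hrs: float) -> dict:
    """Score each cell by a recency-weighted count of past incidents."""
    scores = defaultdict(float)
    for crime in crimes:
        age = now_hrs - crime["t_hrs"]
        weight = 0.5 ** (age / HALF_LIFE_HRS)   # exponential decay with age
        scores[cell_of(crime["x_ft"], crime["y_ft"])] += weight
    return dict(scores)

crimes = [
    {"x_ft": 120, "y_ft": 480, "t_hrs": 95},   # recent burglary
    {"x_ft": 150, "y_ft": 430, "t_hrs": 90},   # nearby, slightly older
    {"x_ft": 2600, "y_ft": 90, "t_hrs": 10},   # old incident far away
]
scores = hotspot_scores(crimes, now_hrs=100)
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:1]
print("patrol cell:", top)
```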
In Germany, researchers at the Institute for Pattern-Based Prediction Techniques have developed
an algorithm for predicting burglaries based on the “near repeat” concept – ie, in an area where a
burglary happens, repeated offences can be expected nearby within a short time frame. The algorithm
predicts burglaries within a radius of about 250 metres, and a time window of between 24 hours and
seven days. The institute claims that in the 18 months since its implementation in certain trial cities,
arrests have doubled thanks to additional patrolling, and the number of burglaries has fallen by as
much as 30%.28
However, such approaches do raise ethical questions. For instance, if algorithms
suggest that a crime is more likely in areas populated by certain ethnic groups, should police carry out
more intensive patrols, or is this a new form of racial discrimination?
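The "near repeat" rule itself is simple enough to sketch directly. The radius and time window below follow the figures reported for the German system; the data and matching code are illustrative only.

```python
# Illustrative "near repeat" flagging: after a burglary, locations within ~250 metres
# are treated as at elevated risk for a 24-hour to 7-day window.
# Parameters mirror those described in the text; the matching logic itself is a sketch.

from math import hypot

RADIUS_M = 250.0
MIN_DELAY_HRS = 24.0
MAX_DELAY_HRS = 7 * 24.0

def at_risk(candidate_xy: tuple[float, float], burglaries: list[dict], now_hrs: float) -> bool:
    """True if a past burglary close enough and recent enough puts this location at risk."""
    for b in burglaries:
        distance = hypot(candidate_xy[0] - b["x_m"], candidate_xy[1] - b["y_m"])
        delay = now_hrs - b["t_hrs"]
        if distance <= RADIUS_M and MIN_DELAY_HRS <= delay <= MAX_DELAY_HRS:
            return True
    return False

burglaries = [{"x_m": 0.0, "y_m": 0.0, "t_hrs": 0.0}]
print(at_risk((100.0, 150.0), burglaries, now_hrs=48.0))   # True: nearby and within window
print(at_risk((100.0, 150.0), burglaries, now_hrs=500.0))  # False: window has passed
```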
Newer algorithms are looking beyond past crime data to inform their predictions. In 2015,
researchers at the University of Virginia examined how Twitter posts could be assessed to predict crime
(although the legal environment for this activity is hazy in many countries). Their algorithm also drew
on weather forecasts – different types of extreme weather conditions have been shown to lead to
spikes in crime.29
The researchers claim that the algorithm’s accuracy is greater than that of models that use only
historical crime data.30
Predictive policing can also come in other guises. In the wake of the marathon bombing of 2013, the
city of Boston trialled predictive surveillance cameras. The AISight (pronounced “eyesight”) platform,
also in use in Chicago and Washington DC, starts by learning when a surveillance camera is showing
“typical behaviour”, such as somebody walking normally along a street. It then learns “untypical
behaviour” that is associated with crimes, such as unusual loitering or a flurry of movement that may
indicate that a fight is breaking out. It can then monitor surveillance cameras for abnormal behaviour,
and send alerts to authorities when it spots something. Unsurprisingly, the system has aroused privacy
concerns, although supporters argue that it is less damaging than more discriminatory attempts to
prevent crime, such as stop-and-search interrogations based on racial profiling.
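The "learn typical, flag untypical" approach can be illustrated with a very simple baseline-and-threshold sketch. The single motion feature, threshold and data below are invented; AISight's real models learn far richer patterns of behaviour.

```python
# Illustrative anomaly detection in the spirit of "learn typical behaviour, flag the untypical".
# Here the per-frame feature is simply the amount of motion in a camera view; the real
# system learns far richer patterns. Thresholds and data are invented for illustration.

from statistics import mean, stdev

def learn_baseline(motion_history: list[float]) -> tuple[float, float]:
    """Summarise 'typical' activity as the mean and spread of past motion levels."""
    return mean(motion_history), stdev(motion_history)

def is_abnormal(motion_now: float, baseline: tuple[float, float], z_threshold: float = 3.0) -> bool:
    """Flag frames whose motion deviates strongly from the learned baseline."""
    mu, sigma = baseline
    return sigma > 0 and abs(motion_now - mu) / sigma > z_threshold

# A week of quiet street footage, then a sudden flurry of movement.
typical = [2.1, 1.8, 2.5, 2.0, 1.9, 2.2, 2.4, 2.3, 1.7, 2.0]
baseline = learn_baseline(typical)
print(is_abnormal(2.2, baseline))   # False: normal foot traffic
print(is_abnormal(9.5, baseline))   # True: alert sent for human review
```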
Justice and policing – The benefits of prediction
- Predicting the likelihood of somebody re-offending: informs decisions on parole and length of prison sentences.
- Predicting crime hotspots: allows extra police cover to be assigned in risky areas.
- Predicting the outcome of a court case: creates stronger incentives to mediate early and avoid court.
Source: EIU
4. Administration: Automating visa processing, patent applications, and fraud
detection
A significant amount of day-to-day government
bureaucracy is routine and based on deciding
whether a person qualifies for something – be
it a pension top-up or a visa. Much like marking
exam papers, civil servants must apply several
rules to each case, which can lead to backlogs
and mistakes creeping in. If the qualification
guidelines for such processes are clear, AI
can speed up processing, according to Andy Chun, associate professor of computer science at City
University in Hong Kong.
Chun worked on an algorithm for the Hong Kong government to process immigration, passport
and visa applications. With millions of forms received yearly, the immigration office had previously
struggled to meet demand. In an interview with the Economist Intelligence Unit, Chun explained that
the algorithm approves some applications, rejects others, and classifies the remainder as “grey areas”
where human judgement is needed. In these cases, the algorithm absorbs the choices that humans
make to allow for future automation.
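A minimal sketch of this three-way triage is shown below. The scoring rules, thresholds and fields are invented placeholders rather than the Hong Kong system's actual logic; the point is the approve/reject/refer structure and the logging of human decisions on grey-area cases for future automation.

```python
# Illustrative three-way triage (approve / reject / refer to a human), loosely mirroring
# the approach described above. The scoring rules are invented placeholders; human
# decisions on "grey area" cases are logged so they can inform future automation.

def score_application(app: dict) -> float:
    """Toy eligibility score in [0, 1] from a few illustrative checks."""
    score = 0.0
    score += 0.4 if app["documents_complete"] else 0.0
    score += 0.4 if not app["previous_overstay"] else 0.0
    score += 0.2 if app["sponsor_verified"] else 0.0
    return score

def triage(app: dict, approve_at: float = 0.8, reject_at: float = 0.3) -> str:
    s = score_application(app)
    if s >= approve_at:
        return "approve"
    if s <= reject_at:
        return "reject"
    return "refer_to_human"   # grey area: human judgement needed

human_decisions = []  # (application, human outcome) pairs kept for retraining

app = {"documents_complete": True, "previous_overstay": True, "sponsor_verified": False}
outcome = triage(app)
if outcome == "refer_to_human":
    human_decisions.append((app, "approve"))  # the caseworker's call is captured
print(outcome)  # -> refer_to_human
```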
IP Australia, the country’s intellectual property agency, is automating patent searches – the process
by which a proposed invention is examined against existing inventions. A similar approach could be
used to detect tax fraud. Today, governments rely on forensic accountants and lawyers wading through
mountains of paperwork, such as annual business filings, to detect possible cases of tax fraud. An
MIT researcher, Jacob Rosen, has explored an AI-led alternative. Rosen and his colleagues trained an
algorithm to recognise specific combinations of transactions and company partnership structures that
were often used in a specific tax dodge and unleashed it on new data.31
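A toy illustration of this pattern-matching idea follows. The specific red flags (layered partnerships, newly created entities, large routed losses) are invented placeholders, not the scheme Rosen's team modelled.

```python
# Illustrative sketch of flagging a suspicious combination of ownership structure and
# transactions. The pattern here (losses routed through a chain of partnerships created
# shortly before use) is an invented placeholder, not the actual scheme studied at MIT.

def suspicious(company: dict) -> bool:
    """Flag companies whose partnership chain and transaction pattern co-occur."""
    deep_chain = company["partnership_depth"] >= 3             # layered partnerships
    young_entities = company["youngest_entity_age_days"] < 90  # created just before use
    routed_losses = any(t["type"] == "loss_transfer" and t["amount"] > 1_000_000
                        for t in company["transactions"])
    return deep_chain and young_entities and routed_losses

filings = [
    {"name": "Acme Holdings", "partnership_depth": 4, "youngest_entity_age_days": 30,
     "transactions": [{"type": "loss_transfer", "amount": 2_500_000}]},
    {"name": "Plain Traders", "partnership_depth": 1, "youngest_entity_age_days": 2000,
     "transactions": [{"type": "sale", "amount": 40_000}]},
]
flagged = [f["name"] for f in filings if suspicious(f)]
print(flagged)  # ['Acme Holdings'] - queued for a human audit
```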
5. Transactional services: Personal assistants and “helperbots”
As explained above, when citizens apply for something – be it a new passport or the registration of property ownership – AI can help governments automate the approval process. However, AI can also help
to enhance the experience of the citizen. In recent years, governments have tried to move these
transactional services online, but uptake is often low. For instance, in the UK more than 50% of vehicle
tax payments are still sent by post, even though 75% of British drivers buy their car insurance online.
This carries a heavy cost. The UK’s Cabinet Office estimated that a digital transaction can be up to 20
times cheaper than a telephone transaction, 30 times cheaper than a postal transaction, and 50 times
cheaper than a face-to-face transaction.32
The benefits of digitisation – Cost of delivering services in the UK

Service                        Average cost per transaction   Digital take-up among users
1. Customs transactions        £0.19                          99.9%
2. Trade mark renewals         £3.06                          66.0%
3. Driving licence renewals    £10.3                          30.0%
4. Outpatient appointments     £36.4                          12.6%
5. Income support claims       £137.0                         0.0%

Sources: UK Cabinet Office, EIU
One way to improve uptake is to personalise digital services. Rather than offering pages of densely
written “frequently asked questions”, Singapore’s government is piloting IBM’s Watson as a “virtual
assistant”, much like Apple’s Siri. It will allow a citizen to tell Watson, in natural language, exactly
what it wants to do. Watson will prompt them
for more details, search through thousands
of potential answers, and return the most
appropriate one. According to Paul Macmillan,
Deloitte’s public-sector industry leader based
in Canada, such personal assistants could
understand and answer everything from “Am
I eligible for a pension or benefit?” to “How do I get a driver’s licence?”. The system can constantly
improve its answer quality by asking the citizen if they thought their issue had been resolved.
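The basic loop – match a question to a known answer, then learn from "was this resolved?" feedback – can be sketched very simply. The keyword matcher below is a toy stand-in for Watson's natural-language pipeline, and the answers and keywords are invented.

```python
# Illustrative sketch of a citizen-services assistant: match a natural-language question to
# the closest known answer, then use a "was this resolved?" signal to re-rank over time.
# This is a toy keyword matcher, not IBM Watson's actual pipeline.

answers = {
    "pension_eligibility": "You may qualify for a pension top-up if you are over 65 ...",
    "driving_licence": "To get a driver's licence, book a test and bring proof of identity ...",
}
keywords = {
    "pension_eligibility": {"pension", "benefit", "eligible", "retirement"},
    "driving_licence": {"driver", "driving", "licence", "license", "test"},
}
helpfulness = {topic: 0 for topic in answers}   # running feedback score per answer

def respond(question: str) -> tuple[str, str]:
    """Return the best-matching topic and its answer for a free-text question."""
    words = set(question.lower().split())
    best = max(answers, key=lambda t: (len(words & keywords[t]), helpfulness[t]))
    return best, answers[best]

def record_feedback(topic: str, resolved: bool) -> None:
    """Citizens' yes/no feedback gradually promotes the answers that actually help."""
    helpfulness[topic] += 1 if resolved else -1

topic, reply = respond("Am I eligible for a pension or benefit?")
print(reply)
record_feedback(topic, resolved=True)
```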
This new breed of digital service does carry the risk of exacerbating the digital divide in societies
by alienating those who are not able to use digital services. In the short term, governments have
responded by setting up “digital kiosks” where humans assist users to carry out the service in question.
However, an alternative approach is to embed the virtual assistants offered by Watson into a helper
robot that could scan people’s details and physical documents.
Such “helperbots” are already being tested in the private sector. As Gerald Wang, program
manager for Asia-Pacific at IDC Government Insights, pointed out, Henn na, a Japanese hotel, has used
helperbots to automate its check-in process. The robots also store luggage and check room cleanliness.
The process is not seamless, however. Visitors have claimed that helperbots struggle when dealing with
unexpected obstacles, such as visitors forgetting their passports. However, according to Mr Wang,
their introduction “shows what can be achieved”.
6. Transport and emergencies: Moderating the impact of urbanisation
In 1950, only 30% of the world’s population lived in urban areas. By 2030, this will hit 60%, with
almost 10% living in “megacities” of 10m or more people.33
Many urban transport systems are already
creaking and require regular maintenance. In Hong Kong, AI algorithms are used to schedule the 2,600
subway repair jobs that take place every week. They do so by identifying opportunities to combine
different repairs and evaluating criteria, such as local noise regulations. Today, the subway enjoys a
99.9% on-time record – far ahead of London or New York.34
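A greatly simplified sketch of this kind of scheduling is shown below: jobs are bundled into night-time work windows, respecting station groupings, noise rules and available hours. The greedy logic and data are illustrative assumptions, far simpler than the engine used in Hong Kong.

```python
# Illustrative sketch of grouping repair jobs into night-time work windows: jobs at the same
# station are bundled, and noisy work is excluded where local regulations forbid it.
# A greedy bundler like this is far simpler than a production scheduling engine.

def schedule(jobs: list[dict], windows: list[dict]) -> dict:
    """Assign each job to the first compatible window, bundling jobs by station."""
    plan = {w["id"]: [] for w in windows}
    for job in sorted(jobs, key=lambda j: j["station"]):   # keep same-station jobs adjacent
        for w in windows:
            same_station_ok = not plan[w["id"]] or plan[w["id"]][0]["station"] == job["station"]
            noise_ok = not job["noisy"] or w["noise_allowed"]
            capacity_ok = sum(j["hours"] for j in plan[w["id"]]) + job["hours"] <= w["hours"]
            if same_station_ok and noise_ok and capacity_ok:
                plan[w["id"]].append(job)
                break
    return plan

jobs = [
    {"id": "track-weld", "station": "Central", "hours": 3, "noisy": True},
    {"id": "signal-check", "station": "Central", "hours": 2, "noisy": False},
    {"id": "escalator", "station": "Mong Kok", "hours": 4, "noisy": False},
]
windows = [
    {"id": "central-night", "hours": 5, "noise_allowed": True},
    {"id": "mongkok-night", "hours": 5, "noise_allowed": False},
]
print(schedule(jobs, windows))
```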
The repair work is still carried out by humans. In time, robots could play a greater role. In the UK,
the University of Leeds recently won a £4.2m (US$6.4m) grant to help create “self-repairing cities”,
where small robots identify and repair everything from potholes to streetlights and utility pipes.35
Despite the eye-catching name, in the short term robots are likely to be more useful for monitoring and
assessing infrastructure rather than repairing it, given the advanced dexterity that the latter often
requires.36
Urbanisation also risks exacerbating pollution, as China’s experience shows. According to a recent
study, air pollution contributes to 1.6m deaths in China every year – one-sixth of all deaths in the
country.37
On a given day, the severity of pollution depends on various factors including temperature,
wind speed, traffic, the operations of factories, and the previous day’s air quality. In August 2015,
IBM China revealed that it is working with Chinese government agencies on a programme to predict
the severity of air pollution 72 hours in advance. It claims that its predictions are 30% more precise
than those derived through conventional approaches.38
The goal now is to expand the length of the
predictions, giving the authorities more time to intervene – for instance, by restricting or diverting
traffic, or even temporarily closing factories.
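A toy version of such a forecast can be built by regressing next-day air quality on the factors listed above. The data and model below are invented for illustration; the real system blends physical dispersion models with machine learning at much finer resolution.

```python
# Illustrative sketch: predict tomorrow's air-quality index (AQI) from the factors named
# in the text. The training data and model are invented placeholders.

from sklearn.linear_model import LinearRegression

# Columns: temperature (C), wind speed (m/s), traffic index, factory output index, today's AQI
X = [
    [30, 1.0, 80, 90, 160],
    [22, 5.0, 60, 70, 90],
    [28, 2.0, 85, 95, 150],
    [18, 6.5, 50, 60, 70],
    [25, 3.0, 75, 85, 120],
]
y = [180, 75, 165, 60, 115]   # next-day AQI observed

model = LinearRegression().fit(X, y)

tomorrow = [[27, 1.5, 82, 92, 155]]     # hot, still day with heavy traffic
forecast = model.predict(tomorrow)[0]
print(round(forecast))                   # a high forecast might trigger traffic restrictions
```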
AI can help governments create better urban-planning strategies. Using software developed by the
US Defense Advanced Research Projects Agency (DARPA), Singapore is analysing huge masses of data,
such as anonymised geolocation data from mobile phones, to help urban planners identify crowded
areas, popular routes, and lunch spots, and to then use this information to make recommendations
about where to build new schools, hospitals, cycle lanes and bus routes.39
Disaster management is another urban-planning application. In the US, the state of California is
trialling AI technology developed by start-up One Concern, which can predict what areas of a town
are likely to be worst affected by an earthquake. The system uses data on the age and construction
materials of buildings. When the early signs of an earthquake are identified, it combines this with
seismic data, so that emergency resources can be targeted. Robots will increasingly work alongside
humans to carry out such emergency efforts. In Japan, following the Fukushima nuclear power
plant explosion of 2011, drones used infra-red sensors to survey and gather data from locations too
dangerous for humans.
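A simplified sketch of the prioritisation logic is shown below: a static vulnerability score built from building age and construction material is combined with live shaking intensity to rank areas for response. The weights, categories and data are illustrative assumptions, not One Concern's model.

```python
# Illustrative sketch of ranking city blocks for emergency response: a static vulnerability
# score (building age and construction material) is combined with live shaking intensity.
# Weights and data are invented placeholders.

MATERIAL_RISK = {"unreinforced_masonry": 1.0, "wood_frame": 0.5, "steel": 0.2}

def vulnerability(block: dict) -> float:
    """Static fragility score from construction material and average building age."""
    age_factor = min(block["avg_building_age_yrs"] / 100.0, 1.0)
    return 0.6 * MATERIAL_RISK[block["material"]] + 0.4 * age_factor

def priority(block: dict, shaking_intensity: float) -> float:
    """Expected-impact proxy: how fragile the block is times how hard it was shaken."""
    return vulnerability(block) * shaking_intensity

blocks = [
    {"name": "Old Town", "material": "unreinforced_masonry", "avg_building_age_yrs": 95},
    {"name": "New Docks", "material": "steel", "avg_building_age_yrs": 10},
]
shaking = {"Old Town": 0.7, "New Docks": 0.9}   # live seismic readings per block
ranked = sorted(blocks, key=lambda b: priority(b, shaking[b["name"]]), reverse=True)
print([b["name"] for b in ranked])   # dispatch teams to the top of the list first
```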
How should governments respond?
For governments, this development of robots and AI holds significant promise, but also raises
challenges that need to be managed. This requires a multi-faceted response.
1. Invest in trials and manage accountability
All government agencies should be asking how robots and AI – and the automation, personalisation
and prediction that they offer – could enhance their work. In many cases, applications will build on
what has been happening for years. For instance, police forces have long monitored areas following a
robbery to prevent future incidents. Predictive policing algorithms are a logical, more sophisticated,
extension of this.
Expectations must be kept in check. Most trials will need close involvement from human staff
initially, and will work best in narrow, tightly defined areas (such as trying to predict burglaries rather
than all types of crime). Moreover, “predicting” should not be confused with “solving”. Predicting
crime and intervening to stop it does nothing to address its root causes. Predicting who is most likely
to develop cancer is certainly valuable, but potential sufferers will still need to address difficult
questions about how to prevent, or treat, the disease (see chapter 2).
It is also critical that accountability is not automated when a task is. “Black box” algorithms whose
rationale or logic are not understood will not be accepted by key stakeholders. Some AI suppliers have
recognised this. IBM’s new “Watson Paths” service provides doctors and medical practitioners with a
step-by-step explanation of how it reached its conclusions.
2. Support research and debate on ethical challenges
Robots and AI give rise to difficult ethical decisions. For instance, how does AI interpreting data from
surveillance cameras affect privacy rights? What if a driverless car’s efforts to save its own passenger risk causing a pile-up with the vehicles behind it? If a robot is programmed to remind people to take
medicine, how should it proceed if a patient refuses? Allowing them to skip a dose could cause harm,
but insisting would impinge on their autonomy.
Such debates have given rise to a new field, “machine ethics”, which aims to give machines
the ability to make appropriate choices – in other words, to tell right from wrong. In many cases
philosophers work alongside computer scientists. In January 2015, the Future of Life Institute, set up
by Jaan Tallinn (co-founder of Skype) and Max Tegmark (MIT professor), among others, to mitigate
the existential risks facing humanity, published a set of research priorities to guide future AI work.40
AI companies have also set up ethics boards to guide their work, while government-backed research funders, including the US Office of Naval Research and the UK government’s engineering-funding council, are evaluating the subject.
Despite what some media might report, the main concern is not that future AI might be “evil”, or
have any sentience whatsoever (ie, the ability to feel). Rather, the concern is that advanced AI may
pursue its narrow objectives, even positive ones (such as passenger safety or patient health), in such a
way that is misaligned with the wider objectives of humanity.41
To explain the point, a stark “paperclip”
example is often used. In this scenario, an AI is tasked with maximising the production of paperclips
at a factory. As the AI becomes more advanced, it proceeds to convert growing swathes of the earth’s
materials, and later those of the universe, into a massive number of paperclips. Although the example
is simplified, it explains a broader concern – that the narrow goals of AI become out of sync, or
“misaligned”, with the broader interests of humanity.
3. Foster new thinking about the jobs challenge
Trying to predict the impact of robots and AI on jobs is difficult. Participants on both sides make valid
points. Positive commentators argue that past technological improvements – from the industrial
revolution to the rise of the Internet – have always led to increased productivity and new types of jobs.
This, in turn, has made most of society better off (even if individual groups, such as farmers or miners,
have suffered).
However, critics retort that past technological developments are a poor guide because robots and
AI have the potential to replace a far wider set of jobs, including many skilled professions in fields as
diverse as healthcare, law, and administration. Furthermore, the new technology-based firms that
emerge are unlikely to be “job-heavy”. Google and Facebook employ a fraction of the staff employed by traditional firms of a similar size, such as General Motors or Wal-Mart.
As a result, the rise of robots and AI has the potential to exacerbate two challenges facing
governments: widening inequality and long-term unemployment. While productivity in a country may
increase, the benefits may accrue to a narrow pool of investors, rather than to employees. In response,
commentators such as Martin Ford, author of Rise of the Robots, have suggested a guaranteed income
for every citizen. This “digital dividend” would be recognition of the fact that much of the new
advances in robots and AI rely on research that was originally funded by governments. As Ford has
pointed out, such a policy would be politically challenging in many countries. An alternative approach
suggested by Jerry Kaplan, author of Humans Need Not Apply, is for governments to try and spread
firm ownership more broadly by reducing the corporate tax rate for firms with a significant number of
individual shareholders. This would allow individuals to benefit more from the robots and AI revolution.
4. Invest in education, but not the traditional sort
The stock response to any technical challenge is to invest in education to “future-proof” a country’s
population. However, no type of conventional high-school or university education can adequately
prepare students for a world where robots and AI are prominent. The speed of change is too great
and nobody can predict what skills will be needed in ten years’ time. Competency-based education
programmes hold more promise. They focus on teaching individual skills (or competences) and can be
completed at any stage in an employee’s career. They can also be quickly designed and rolled out in
response to companies’ ever-changing needs.
Udacity, an online-education firm, has teamed up with companies such as AT&T to provide “nanodegrees” – job-related qualifications that can be completed in six to 12 months for $200 per month.
Dev Bootcamp offers a nine-week course for code developers, paid for in part by a success fee. The firm
charges employers for each graduate hired, after they successfully complete 100 days on the job.42
To
fund such programmes, Jerry Kaplan has called for companies to offer “job mortgages”. Under this
system, workers would commit to undertaking ongoing training as part of their employment contract.
The cost of the programmes would be deducted from their future wages.43
5. Tackle the security issues at a global level
The development of lethal autonomous weapons systems (LAWS) – which could select and attack targets without human intervention – is
closer than most imagine. Indeed, there is a first-mover advantage in their development. Once they
become relatively straightforward to produce, military powers will struggle to avoid the temptation to
gain an advantage over their foes. Once one country is thought to have them, others are likely to follow
suit.
This has culminated in the “Campaign to Stop Killer Robots”, fronted by an alliance of human rights
groups and scientists. They call for a pre-emptive ban on developing and using LAWS, in the same way
that blinding laser weapons and unexploded cluster bombs were banned in the past. However, a full
ban is opposed by some countries, including the UK and the US, which have argued that existing law is
sufficient to regulate the use of LAWS.
Others have questioned whether LAWS are ethically worse than traditional weapons. If they more
accurately identify targets, while also meeting the traditional humanitarian rules of distinction,
proportionality, and military necessity, this could result in fewer unintended deaths than traditional
human-led warfare.
As with nuclear weapons, the best long-term solution is an international agreement with clear
provisions on what countries can do. The UN has already held a series of meetings on LAWS under the
auspices of the Convention on Certain Conventional Weapons in Geneva, Switzerland. The next week-
long meeting will be held in April 2016. However, controlling the development of LAWS is likely to prove more difficult than controlling nuclear weapons, as LAWS can be developed in secret far more easily and quickly.
Conclusion
Robots and AI have been heavily hyped in the popular media discourse, with advocates and sceptics
each presenting dramatic visions of utopian, or dystopian, futures. Yet in this polarised kind of debate,
many of the nuances and subtleties are lost.
On one hand, supporters tend to over-promise what their technologies can do, and often have
vested interests in these technologies. On the other hand, the underlying trends that make robots
and AI a reality are developing much faster than most people realise, and a tipping point in their
development has been reached.
The range of tasks that robots and AI will soon be able to undertake is also far beyond what many
people appreciate. This could dramatically enhance the work of governments – by automating and
personalising services, and by better predicting challenges before they arise. However, robots and AI
also pose risks to security, employment, and privacy, and raise knotty ethical challenges that require
wider debate. Governments are right to progress robots and AI trials, but must also give considered
thought to the challenges they bring.
Chapter 2: Genomic Medicine
Executive Summary
Genomic medicine uses an individual’s genome
– ie, their unique set of genes and DNA – to
personalise their healthcare treatment. Genomic
medicine’s advance has been boosted by two major
developments. First, new technology has made it
possible, and affordable, for anybody to quickly
understand their own genome. Second, new gene-
editing tools may allow practitioners to “find and
replace” the mutations within genes that give rise
to disorders.
Much of genomic medicine is relatively
straightforward. Rare disorders caused by
mutations in single genes are already being treated
through gene editing. In time, these disorders
may be eradicated altogether. For more common
disorders, such as cancer, the response is more
complex. However, patients’ genomic data could
lead to more sophisticated preventative measures,
better detection, and personalised treatments.
Other potential applications of genomic medicine
are mind-boggling. Researchers are exploring
whether gene editing could make animal organs
suitable for human transplant, and whether “gene
drives” in mosquito populations could help to
eradicate malaria.
The fast pace of development has led to
ethical concerns. Some worry that prospective
parents may try to edit desirable traits into their
embryos’ genes, to try and increase their baby’s
attractiveness or intelligence. This, critics argue,
is the fast route back to eugenics and governments
need to respond appropriately.
This chapter starts with an overview of what
exactly genomic medicine is and the recent
advances that have led to such excitement. It then
examines three key ways in which genomic medicine
could transform healthcare delivery. The chapter
concludes with suggestions for government leaders
on how to respond.
Key definitions
Biology terms
Organism
An organism is any living biological entity.
Examples include people, animals, plants, and
bacteria. Organisms are made up of cells.
Cells
The human body is composed of trillions of cells.
They are the smallest unit of life that can replicate
independently, and make up the tissue in organs
such as the brain, skin and lungs.
Chromosomes
Most human cells contain a set of 46 chromosomes,
which in turn contain most of that person’s DNA.
These 46 chromosomes come in 23 pairs. One
chromosome in each pair is inherited from the
mother, and one from the father.
DNA
DNA (or deoxyribonucleic acid) is mainly located
in chromosomes. It provides a set of instructions
for how a person will look, function, develop, and
reproduce. DNA is made up of sequences of four
chemical “blocks”, or bases: adenine (A), cytosine
(C), guanine (G) and thymine (T).
Genes
Genes are individual “sequences” of DNA that
determine the physical traits that a person inherits
and their propensity to develop certain diseases.
For example, the sequence ATCGTT might be an
instruction for blue eyes. Individuals inherit two
versions of each gene – one from each parent.
Proteins
Some genes instruct the body on how to make
different proteins. Proteins are necessary for an
organism to develop, survive and reproduce. For
instance, the BRCA1 gene is known as a “tumour
suppressor” because the protein it instructs the
body to make helps repair damaged DNA in breast tissue.
Genome
A genome is an organism’s complete set of DNA.
For a human, it contains around three billion
DNA letters (or bases), and around 20,000 genes,
located on 46 chromosomes. Most cells in a person’s
body contain the same, unique genome.
Genome sequencing
Genome sequencing is undertaken in a laboratory,
and determines the complete DNA sequence of an
organism – ie, how the DNA “blocks”, or bases, are
ordered.
Genetic variation
99.9% of DNA is identical in all people in the world,
regardless of race, gender or size. However, the
remaining “genetic variation” explains some of
the common differences in appearance, disease
susceptibility, and other traits.
Gene mutations
Gene mutations occur when an individual’s gene
becomes different to that which is found in most
people. These mutations could be inherited
from one’s parents, or they could develop due to
environmental factors such as exposure to toxins.
Gene disorders
Up to 10,000 diseases, called monogenic disorders,
are caused by a mutation in a single gene. Other,
more complex disorders, such as most cancers, are
caused by a combination of multiple gene mutations
and external factors such as diet and exposure to
toxins.
Gene therapy
Gene therapy replaces a mutated gene with a
healthy copy, or introduces a new gene to help fight
a disease. A new technology, CRISPR-Cas9, allows
gene mutations to be “edited” and replaced with
“correct” genes. However, it has not yet been used
on humans.
Gene disorders
Cystic fibrosis
Cystic fibrosis is a life-threatening disease. A
mutated gene causes a thick build-up of mucus in
the lungs, pancreas, liver, kidneys, and intestines.
The build-up makes it hard to breathe and digest
food, and leads to frequent infections.
Familial hypercholesterolaemia
A condition that causes patients’ “bad cholesterol”
levels to be higher than normal, increasing the risk
of heart disease and heart attacks at an early age.
Haemophilia
Haemophilia is a group of disorders that can be life-
threatening. Mutated genes mean that a patient’s
blood does not clot properly, potentially leading to
excessive bleeding. Internal bleeding can damage
key organs and tissues.
Huntington’s disease
Huntington’s disease causes the progressive
breakdown of nerve cells in the brain. Over time
it increasingly affects a patient’s movement,
cognition and behaviour.
Mitochondrial diseases
Mitochondrial disease is a group of disorders that
are caused by mutations in mitochondrial DNA –
the DNA found in mitochondria, the parts of cells
that convert food into energy. Symptoms include
loss of muscle coordination, learning disabilities,
heart disease, respiratory disorders, and dementia.
Muscular dystrophy
A group of conditions that gradually cause the
muscles to weaken, leading to an increasing level
of disability. One of the most common forms is
“Duchenne muscular dystrophy”; men with the
condition will usually live only into their 20s or 30s.
Maple syrup urine disease (MSUD)
A condition caused by a gene defect which prevents
the body from breaking down certain parts of
proteins, leading to a buildup of chemicals in the
blood. In the most severe form, MSUD can damage
the brain during times of physical stress (such as
infection, fever, or not eating for a long time).
Sickle cell disease
A group of disorders that cause the red blood cells
to become rigid and sickle-shaped, in contrast
to normal red blood cells, which are flexible and
disc-shaped. The abnormal cells can block blood
vessels, resulting in tissue and organ damage and
severe pain. One positive effect is that sufferers are
protected from malaria.
Severe combined immunodeficiency (SCID)
A group of potentially fatal disorders in which
a gene mutation results in patients being born
without a functioning immune system. This makes
them vulnerable to severe and recurrent infections.
Tay-Sachs disease
A fatal disorder that primarily occurs in children.
It occurs because a mutated gene can no longer
produce a specific enzyme, resulting in a fatty
substance building up in brain cells and nerve cells
which destroys the patient’s nervous system.
Thalassaemia
Thalassaemia is a group of disorders in which the
body makes an abnormal form of haemoglobin, the
protein in red blood cells that carries oxygen. If left
untreated, it can cause organ damage, liver disease,
and heart failure.
Source: EIU, Genetic Home Reference, NHS
Background
In May 2013 the actress Angelina Jolie underwent a much-publicised double mastectomy. She did so
after being informed that she has a faulty version of the BRCA1 gene, giving her an 87% chance of
developing breast cancer.1
A year later, researchers found that the number of women in the UK who had
undergone BRCA testing had doubled – a phenomenon now referred to as the “Angelina Jolie effect”.2
Although she may not have planned it, Ms Jolie gave a significant boost to the field of genomic
medicine – broadly defined as using an individual’s genomic information (or DNA) in their clinical care.
In recent years, the field has enjoyed landmark breakthroughs that could lead to a revolution in how
diseases are diagnosed, treated, prevented, and even eradicated.
What is genomic medicine?
A move away from one-size-fits-all
If a person feels unwell they typically visit a doctor, usually after their symptoms become prolonged
or pronounced. If they are lucky, they will make that visit in good time and the doctor will make a
diagnosis by checking their symptoms and asking questions about family history and lifestyle. For
more complex conditions, various tests and the involvement of specialists may be needed.
Once diagnosed, the patient will embark on a treatment plan. While the impact of a disease can feel
very personal, the treatment usually isn’t. Patients receive treatment based on nationally accepted
procedures and protocols. If the first treatment doesn’t work, a different approach is tried – an
iterative process that can turn into a race against time for more serious diseases.
Genomic medicine (sometimes referred to as personalised medicine or precision medicine) promises
a more nuanced approach. It takes as its premise the fact that each person’s biological make-up is
unique and so their diagnosis and treatment should be as well – using that individual’s unique genomic
information, or DNA. Its advance has been boosted by two major developments.
First, the human genome was successfully sequenced in 2001. This provided a set of “blueprints”
for how the human body works. It was followed by rapid technological advances over the next 14 years
that made it possible for anybody to quickly and
cheaply sequence their own genome. Second,
sophisticated “gene-editing” tools have been
developed that can potentially modify faulty
genes and help treat, or prevent, certain diseases.
The mapping of the human genome and the rise of “next-gen” sequencing
In 2001 the Human Genome Project (HGP), the world’s largest collaborative biological research
project, announced the first ever successful sequencing of the human genome. In terms of scale, this
has been likened to an “internal” voyage of human discovery on a par with the external voyage that
brought man to the moon.3
A person’s genome is their full set of DNA. It is located in most cells in their body and is packaged
into 23 pairs of chromosomes. One chromosome in each pair is inherited from the person’s mother and
one from the father. Each chromosome is made up of individual sequences of DNA, called genes.
Much like how letters in the alphabet are arranged to form words, four basic blocks (A, C, G and T) of
DNA are arranged to form genes. The HGP discovered that a person has approximately 20,500 genes –
far fewer than previously thought. It also identified these genes’ locations and how their basic blocks
are ordered.  
This is important because a person’s genes determine their traits – ie, how they look and function.
They also determine that person’s propensity to develop certain diseases. This is because genes
provide instructions to the body for how to make proteins. Proteins, in turn, carry out key functions
such as repairing cells or protecting against infections.
If the basic blocks in an individual gene become garbled or “mutated”, it may be unable to produce
the protein needed. For instance, the BRCA1 gene is known as a “tumour suppressor” because it can
repair mutated DNA in breast tissue. If it becomes damaged by a mutation (as in Angelina Jolie’s case),
the risk of breast cancer increases.
99.9% of the DNA in human genomes is identical across all people. However, the remaining 0.1%
(ie, a person’s gene variations) is of huge importance. Some of these gene variations, or mutations,
are inherited from our parents, while others are developed over time (due to smoking or exposure to
toxins, for example). Some mutations are harmless, while others are associated with diseases. The
impact of many remains unknown.
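At its simplest, finding a variant means comparing a patient's sequence with a reference. The short sketch below does this for an invented stretch of DNA; real variant calling operates on billions of bases and must cope with sequencing errors, insertions and deletions.

```python
# Illustrative sketch of spotting a single-letter variant by comparing a short stretch of a
# patient's DNA with a reference sequence. The sequences are invented for illustration.

REFERENCE = "ATCGTTGACCTGA"
PATIENT   = "ATCGTTGACGTGA"   # one base differs from the reference

def find_variants(reference: str, sample: str) -> list[tuple[int, str, str]]:
    """Return (position, reference base, sample base) for every mismatch."""
    return [(i, r, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

for pos, ref_base, alt_base in find_variants(REFERENCE, PATIENT):
    print(f"position {pos}: reference {ref_base} -> patient {alt_base}")
# Whether such a change matters depends on which gene it sits in and what protein it affects.
```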
The HGP provided a vital starting point to the understanding of our genes. In the decade since
it concluded, scientists launched a set of sequencing projects to understand how genes (and gene
mutations) vary across different people. For instance, how do the genes of Alzheimer’s patients differ
from those who do not have the disease? How do the genes of patients that respond well to treatment
differ from those who do not?
Initiatives to sequence genomes around the world

- Human Genome Project (1990-2003): An international research collaboration to carry out the first ever sequencing of the human genome.
- Personal Genome Project (2005- ): The Harvard-led project aims to sequence and publish the genomic data of 100,000 volunteers.
- 1,000 Genomes Project (2008-2015): An international research project that sequenced more than 2,500 genomes and identified many rare variations.
- 100,000 Genomes Project (2013- ): A four-year project to sequence 100,000 genomes from UK NHS patients with rare diseases and cancers, and their families.
- Genome Arabia (2013- ): A project to sequence up to 500 individuals from Qatar, Bahrain, Kuwait, UAE, Tunisia, Lebanon, and KSA.
- Saudi Human Genome Program (2013- ): A five-year project to analyse more than 20,000 Saudi genomes to better understand the genetic basis of disease.

Source: EIU
 
Artificial intelligence and Expert systems by dr. protik.pptx
Artificial intelligence and Expert systems by dr. protik.pptxArtificial intelligence and Expert systems by dr. protik.pptx
Artificial intelligence and Expert systems by dr. protik.pptx
 
The Revolutionary Progress of Artificial Inteligence (AI) in Health Care
The Revolutionary Progress of Artificial Inteligence (AI) in Health CareThe Revolutionary Progress of Artificial Inteligence (AI) in Health Care
The Revolutionary Progress of Artificial Inteligence (AI) in Health Care
 
Md vs machine AI in Healthcare by Dr.Mahboob Khan Phd
Md vs machine AI in Healthcare by Dr.Mahboob Khan PhdMd vs machine AI in Healthcare by Dr.Mahboob Khan Phd
Md vs machine AI in Healthcare by Dr.Mahboob Khan Phd
 
Benefits of AI for the Medical Field in 2023.
Benefits of AI for the Medical Field in 2023.Benefits of AI for the Medical Field in 2023.
Benefits of AI for the Medical Field in 2023.
 
Here are the Benefits of AI for the Medical Field in 2023 and Beyond.pdf
Here are the Benefits of AI for the Medical Field in 2023 and Beyond.pdfHere are the Benefits of AI for the Medical Field in 2023 and Beyond.pdf
Here are the Benefits of AI for the Medical Field in 2023 and Beyond.pdf
 
Here are the Benefits of AI for the Medical Field in 2023 and Beyond!.pdf
Here are the Benefits of AI for the Medical Field in 2023 and Beyond!.pdfHere are the Benefits of AI for the Medical Field in 2023 and Beyond!.pdf
Here are the Benefits of AI for the Medical Field in 2023 and Beyond!.pdf
 
CLGPPT FOR DISEASE DETECTION PRESENTATION
CLGPPT FOR DISEASE DETECTION PRESENTATIONCLGPPT FOR DISEASE DETECTION PRESENTATION
CLGPPT FOR DISEASE DETECTION PRESENTATION
 
Digital healthcare show - How will Artificial Intelligence in healthcare will...
Digital healthcare show - How will Artificial Intelligence in healthcare will...Digital healthcare show - How will Artificial Intelligence in healthcare will...
Digital healthcare show - How will Artificial Intelligence in healthcare will...
 
MK PRESENTATION.pptx
MK PRESENTATION.pptxMK PRESENTATION.pptx
MK PRESENTATION.pptx
 
20Q91A6753 (1) (1).pdf
20Q91A6753 (1) (1).pdf20Q91A6753 (1) (1).pdf
20Q91A6753 (1) (1).pdf
 
Artificial intelligence in Health
Artificial intelligence in HealthArtificial intelligence in Health
Artificial intelligence in Health
 
ai_health.pdf
ai_health.pdfai_health.pdf
ai_health.pdf
 
Spearheading Health Innovation with Internet of Things and Big Data
Spearheading Health Innovation with Internet of Things and Big DataSpearheading Health Innovation with Internet of Things and Big Data
Spearheading Health Innovation with Internet of Things and Big Data
 
The Future of mHealth - Jay Srini - March 2011
The Future of mHealth - Jay Srini - March 2011The Future of mHealth - Jay Srini - March 2011
The Future of mHealth - Jay Srini - March 2011
 
Medical Device Development and Prototyping in San Jose.pdf
Medical Device Development and Prototyping in San Jose.pdfMedical Device Development and Prototyping in San Jose.pdf
Medical Device Development and Prototyping in San Jose.pdf
 
How artificial intelligence(AI) will change the world in 2021
How artificial intelligence(AI) will change the world in 2021How artificial intelligence(AI) will change the world in 2021
How artificial intelligence(AI) will change the world in 2021
 
INITIATION OF ARTIFICAL INTILLIGENCE IN MEDICAL SCHOOLS [Autosaved].pdf
INITIATION OF ARTIFICAL INTILLIGENCE IN MEDICAL SCHOOLS [Autosaved].pdfINITIATION OF ARTIFICAL INTILLIGENCE IN MEDICAL SCHOOLS [Autosaved].pdf
INITIATION OF ARTIFICAL INTILLIGENCE IN MEDICAL SCHOOLS [Autosaved].pdf
 
Artificial intelligence healing custom healthcare software development problems
Artificial intelligence  healing custom healthcare software development problemsArtificial intelligence  healing custom healthcare software development problems
Artificial intelligence healing custom healthcare software development problems
 
Role of artificial intelligence in health care
Role of artificial intelligence in health careRole of artificial intelligence in health care
Role of artificial intelligence in health care
 

Advanced_Science_future_of_gov_en (6)

Terry Hartmann – Vice-President of Unisys Global Transportation and Security
Georg Hasse – Head of Homeland Security Consulting at Secunet
Jennifer Lynch – Senior Staff Attorney at the Electronic Frontier Foundation
C. Maxine Most – Principal at Acuity Market Intelligence
Dr Edgar Whitley – Associate Professor in Information Systems at the LSE

The Economist Intelligence Unit bears sole responsibility for the content of this report. The findings and views expressed in the report do not necessarily reflect the views of the commissioner.

The report was produced by a team of researchers, writers, editors, and graphic designers, including:
Conor Griffin – Author and editor (Robots and Artificial Intelligence; Genomic Medicine)
Adam Green – Editor (Biometrics)
Michael Martins – Author (Biometrics)
Maria-Luiza Apostolescu – Researcher
Norah Alajaji – Researcher
Dr Bogdan Popescu – Adviser
Dr Annie Pannelay – Adviser
Gareth Owen – Graphic design
Edwyn Mayhew – Design and layout

For any enquiries about the report, please contact:
Conor Griffin, Principal, Public Policy, The Economist Intelligence Unit, Dubai | United Arab Emirates. E: conorgriffin@eiu.com, Tel: +971 (0)4 433 4216, Mob: +971 (0)55 978 9040
Adam Green, Senior Editor, The Economist Intelligence Unit, Dubai | United Arab Emirates. E: adamgreen@eiu.com, Tel: +971 (0)4 433 4210, Mob: +971 (0)55 221 5208
Introduction

Governments need to stay abreast of the latest developments in science and technology, both to regulate such activity and to utilise the new developments in their own service delivery. Yet the pace of change is now so rapid that it can be difficult for policymakers to keep up. Identifying which developments to focus on is a major challenge. Some are subject to considerable hype, only to falter when they are applied outside the laboratory.

Why focus on robots and AI, genomic medicine, and biometrics?

This report focuses on three advances which are the subject of considerable excitement today: robots and artificial intelligence (AI); genomic medicine; and biometrics. The three share common characteristics. For instance, they all run on data, and their rise has led to concerns about privacy rights and data security. In some cases, they are progressing in tandem. Genomic medicine is generating vast amounts of DNA data and practitioners are using AI to analyse it. AI also powers biometric facial and iris recognition.

These are not the only developments that are relevant to governments, of course. Virtual reality headsets embed a user's brain in an immersive 3D world. Surgeons could use them to practise risky surgeries on human-like patients, while universities are already using them to design enhanced classes for students. 3D printing produces components one layer at a time, allowing for more intricate design, as well as reducing waste. Governments are starting to use the technology to "print" public infrastructure, such as a new footbridge in Amsterdam, designed by the Dutch company MX3D. Nanotechnology describes the manipulation of individual atoms and molecules on a tiny scale – one nanometre is a billionth of a metre. Nanoscale drug delivery could target cancer cells with new levels of accuracy, signalling a major advance in healthcare quality. Brain-mapping programmes like the US government-funded BRAIN Initiative could allow mankind to finally understand the inner workings of the human brain and usher in revolutionary treatments for conditions such as Alzheimer's disease and depression.

However, robots and AI, genomic medicine, and biometrics share three characteristics which mark them out as especially critical for governments. First, all three offer a clear way to improve, and in some cases revolutionise, how governments deliver their services, as well as improving overall government performance and efficiency. Second, the three developments have been trialled, to a certain extent, and so there is growing evidence on their effectiveness and how they can best be implemented. Finally, they are among the most transformative developments in terms of the degree to which they could change the way people live and work.
1. Robots and AI – Their long-heralded arrival is finally here

Robots and artificial intelligence (AI) can automate and enhance work traditionally done by humans. Often they operate together, with AI providing the robot with instructions for what to do. Google's driverless cars are a much-cited example. The subject is of critical importance for governments. Robots are moving beyond their traditional roles in logistics and manufacturing, and AI is already far more advanced than many people realise – powering everything from Apple's personal assistant, Siri, to IBM's Watson platform.

Much of today's AI is based on a branch of computer science known as machine learning, where algorithms teach themselves how to do tasks by analysing vast amounts of data. It has been boosted by rapid expansions in computer processing power, a deluge of new data, and the rise of open-source software. Today, AI algorithms are answering legal questions, creating recipes, and even automating the writing of some news articles.

Figure: Robots and artificial intelligence – A combined approach (Source: EIU). Artificial intelligence is capable of doing the knowledge work traditionally done by humans and can provide instructions to the robot for what to do; robots are capable of doing the manual work traditionally done by humans and can take action based on those instructions.

Robots and AI have the potential to greatly enhance the work of governments and the public sector, by supporting automation, personalisation, and prediction. Automated exam grading can free up human teachers to focus on teaching, while automated robot dispensaries have reduced error rates in pharmacies. Governments can emulate Netflix, an online video service, by using AI to personalise the transactional services they provide to citizens. Crime-prediction algorithms are allowing police to intervene before a crime takes place.

Some worry about a future era of "superintelligence", led by advanced machines that are beyond the comprehension of humans. Others worry, with good reason, about the nearer-term effects on jobs and security. As a result, governments need to strike the right balance between supporting the rise of robots and AI, and managing their negative side effects.
Figure: How will robots and AI benefit governments? (Source: EIU). Three key benefits – automation, personalisation, and prediction and prevention – apply across administration, health & social care, education, justice & policing, transactional services, and transport & emergency response.

2. Genomic medicine – Ushering in a new era of personalisation

Genomic medicine uses an individual's genome – ie, their unique set of genes and DNA – to personalise their healthcare treatment. Genomic medicine's advance has been boosted by two major developments. First, new technology has made it possible, and affordable, for anybody to quickly map their own genome. Second, new gene-editing tools allow practitioners to "find and replace" the mutations within genes that give rise to disorders.

Initiatives for sequencing genomes around the world (Source: EIU):
- Human Genome Project (1990-2003) – An international research collaboration to carry out the first ever sequencing of the human genome.
- Personal Genome Project (2005-) – The Harvard-led project aims to sequence and publish the genomic data of 100,000 volunteers.
- 1,000 Genomes Project (2008-2015) – An international research project that sequenced more than 2,500 genomes and identified many rare variations.
- 100,000 Genomes Project (2013-) – A four-year project to sequence 100,000 genomes from UK NHS patients with rare diseases and cancers, and their families.
- Genome Arabia (2013-) – A project to sequence up to 500 individuals from Qatar, Bahrain, Kuwait, UAE, Tunisia, Lebanon, and KSA.
- Saudi Human Genome Program (2013-) – A five-year project to analyse more than 20,000 Saudi genomes to better understand the genetic basis of disease.
Much of genomic medicine is relatively straightforward. Rare disorders caused by mutations in single genes are already being treated through gene editing. In time, these disorders may be eradicated altogether. For common diseases, such as cancer, patients' genomic data could lead to more sophisticated preventative measures, better detection, and personalised treatments. Other potential applications of genomic medicine are mind-boggling. For instance, researchers are exploring whether gene editing could make animal organs suitable for human transplant, and whether "gene drives" in mosquito populations could help to eradicate malaria.

The fast pace of development has given rise to ethical concerns. Some worry that prospective parents may try to edit desirable traits into their embryos' genes, to try to increase their baby's attractiveness or intelligence, for example. This, critics argue, is the fast route back to eugenics, and governments need to respond appropriately.

Figure: How will genomic medicine affect healthcare? (Source: EIU). Rare disorders (eg, cystic fibrosis): diagnosing, treating and eradicating. Common diseases (eg, cancer, Alzheimer's): enhancing screening, prevention and treatment. Epidemic diseases and a lack of organ donors: gene drives and next-generation transplants.

3. Biometrics – Mapping citizens, improving services

A biometric is a unique physical or behavioural trait, such as a fingerprint, iris, or signature. Unique to every person, and collectable through scanning technologies, biometrics provides each person with a unique identification which can be used for everything from authorising mobile phone bank payments to quickly locating medical records after an accident or during an emergency.

Humans have used biometrics for thousands of years, with some records suggesting fingerprint-based identification as far back as the Babylonian era of 500 B.C. But its true scale is only now being realised, thanks to rapid developments in technology and the growing need for a more secure and efficient way of identifying individuals. From a landmark national identification initiative in India to border control initiatives in Singapore, the US and the Netherlands, biometrics can be used in a wide range of government services. It is improving the targeting of welfare payments; helping to cut absenteeism among government workers; and improving national security. However, its use raises ethical challenges that governments need to manage – privacy issues, the risk of "mission creep", data security, public trust, and the financial sustainability of new technology systems. How can governments both utilise the benefits of biometric tools and manage the risks?
Figure: What is biometrics? (Source: EIU). Types of biometrics include physiological traits (vein pattern, palm pattern, facial, fingerprint, iris, DNA) and behavioural traits (keystroke, signature, voice).

Figure: How is biometrics being used by governments? (Source: EIU). Uses include secure digital services, reducing health costs, biometric roll calls, targeted welfare, virtual justice, eliminating ghost workers, biometric elections, and smart borders.
How does this report help policymakers?

This report is designed to help policymakers in three ways. First, robots and AI, genomic medicine, and biometrics are technical topics, and it can be difficult for non-experts to understand exactly what they are. What's more, they are often poorly explained in the media articles that report on them. This can lead to misunderstandings, particularly when it comes to the risk of imminent negative consequences. This report aims to address this by providing a clear and concise overview of what each of the advances entails, as well as summarising how they have developed to date.

Second, discussions about the impact of robots, AI, genomic medicine and biometrics often focus on their use in the private sector. However, advances in all three fields could transform how governments deliver services, as well as enhancing government productivity and efficiency. This report describes these potential impacts on governments' work, citing examples from around the world.

Finally, advances in all three areas require a response from governments. In some cases, new legislation and policies will be needed. For instance, new guidelines are required for storing biometric data for law-enforcement purposes, to guard against the possible targeting of ethnic minorities. Companies must be forbidden from using citizens' genomic data to discriminate against them. Robots and AI will cause some jobs to disappear, and so policies such as guaranteed incomes will need consideration. In other cases, governments will need to support the advances by unblocking bottlenecks. For instance, universities and hospitals will need to design new courses for students and staff on how to use, store, and analyse patients' genomic data.

In certain situations, particularly those involving ethical issues, the optimal response is unclear and is likely to differ across countries. For instance, how does AI interpreting data from surveillance cameras affect "traditional" privacy rights? Should the government support research into the genetic basis of intelligence? This report provides guidance to government leaders, who must answer these tough questions in the years ahead.
Chapter 1: Robots and Artificial Intelligence
Executive Summary

Background

Jeopardy! is a long-running American quiz show with a famous twist. Instead of the presenter asking contestants questions, he provides them with answers. The contestants must then guess the correct question. In 2011, a first-time contestant called Watson shocked viewers when it beat Jeopardy!'s two greatest-ever champions – who between them had won more than US$5m. Although it sounded human, Watson was actually a machine created by IBM and powered by AI. Some dismissed the achievement as trivial. After all, computers have been beating humans at chess for years. However, winning Jeopardy! was a far bigger achievement. It required Watson to understand tricky colloquial language (including puns), draw on vast pools of data, reason as to the best response, and then enunciate this clearly at the right time. Although it was only a TV quiz show, Watson's victory offered a vision of the future, in which robots and AI potentially carry out a growing portion of the work traditionally done by humans.

Robots and artificial intelligence (AI) can automate and enhance the work that is traditionally done by humans. Often they operate together, with AI providing the robot with instructions for what to do. Google's driverless cars are a prominent example. The subject is of critical importance. Robots are moving beyond their traditional roles in logistics and manufacturing. AI is already far more advanced than many people realise – powering everything from Apple's personal assistant, Siri, to IBM's Watson platform. Much of today's AI is based on a field of computer science known as machine learning, where algorithms teach themselves how to do tasks by analysing vast amounts of data. It has been boosted by rapid expansions in computer processing power, a deluge of new data, and the rise of open-source software. Today, AI algorithms are answering legal questions, creating recipes, and even automating the writing of some news articles.

Some worry about a new era of "superintelligence", led by advanced machines that are beyond the comprehension of humans. Others worry about the near-term effects on jobs and security. Critically, however, robots and AI also have the potential to greatly enhance government work. Automated exam grading can free up human teachers to focus on teaching, while automated robot dispensaries have reduced error rates in pharmacies. Governments can emulate Netflix, an online-video service, by using AI to offer personalised transactional services. Crime-prediction algorithms are allowing police to intervene before a crime takes place.

This chapter starts with an overview of what exactly robots and AI are, before explaining why they are now experiencing rapid uptake when they haven't in the past. It then assesses how robots and AI can improve the work of governments in areas as diverse as education, justice, and urban planning. The chapter concludes with suggestions for government leaders on how to respond.
Robots and AI: What are they?

Defining robots and AI is difficult since they cover a vast spectrum of technologies – from the machines zooming around Amazon's warehouses to the automated algorithms that account for an estimated 70% of trades on the US stock market.1 One approach is to think in terms of capabilities. Robots are machines that are capable of automating and enhancing the manual work done by humans. AI is software that is capable of automating and enhancing the knowledge-based work done by humans. Often they operate together, with AI providing the robot with instructions for what to do. Robots and AI do not simply mimic what humans do – they can draw on their own strengths. In some cases, this allows them to do things that no human, no matter how smart or physically powerful, could ever do.

Figure: Robots and artificial intelligence – A combined approach (Source: EIU). Artificial intelligence is capable of doing the knowledge work traditionally done by humans and can provide instructions to the robot; robots are capable of doing the manual work traditionally done by humans and can act on those instructions.

Robots – New shapes, new sizes; more automated, more capable

The term "robot" is derived from a Slavic word meaning "monotonous" or "forced labour", and gained popularity through the work of science fiction authors such as Isaac Asimov. In the 1950s, the Massachusetts Institute of Technology (MIT) demonstrated the first robotic arm, and in 1961 General Motors installed a 4,000 lb version in its factory and tasked it with stacking die-cast metal. Over time, the use of robots in logistics and manufacturing grew. However, their long-heralded entrance into other sectors, such as fast food and healthcare, is yet to be realised. This looks set to change.

When people think about robots, they typically think about humanoids – ie, those that look and act like humans. In June 2015, South Korea's DRC-HUBO humanoid won the annual DARPA Robotics Challenge after demonstrating an impressive ability to switch between walking and "wheeling". However, humanoids remain limited. They are prone to falling over and have trouble dealing with uncertain terrain. The logic behind developing them is also questionable. While humans can carry out an impressive range of tasks, we are not necessarily well suited to many of them – our arms are too weak, our fingers are too slow, and most of us are too big to get into tight spaces. Building robots to emulate humans might thus be a self-limiting approach.

A separate breed of robot is more promising. These look nothing like humans. Instead, they are
designed entirely with their environment in mind and come in many shapes and sizes. Kiva robots (since renamed Amazon Robotics) look like large ice hockey pucks. They glide under boxes of goods and transfer them across Amazon's warehouses. They bear little resemblance to the Prime Air drone robots that Amazon wants to use to deliver packages, or to the Da Vinci, the world's most popular surgical robot, which looks like a set of octopus arms. While they look different, this breed of robot shares a common goal: mastering a narrow band of tasks by using the latest advancements in robotic movement and dexterity.

These robots also differ in their degree of automation. Amazon's Kiva robots operate largely independently, and few humans are visible in the next-generation warehouses where they operate – Amazon's management forecasts that their use will lead to a 20-40% reduction in operating costs.2 By contrast, the Da Vinci remains directly under the control of human surgeons – essentially providing them with extended "superarms" with capabilities and precision far beyond their own.

Modern-day robots in action (Source: EIU):
- Amazon robots – Move around Amazon warehouses independently, collecting goods and bringing them to human packagers for dispatch.
- Da Vinci – Miniaturised surgical instruments are mounted on three robotic arms, while a fourth arm contains a 3D camera that places the surgeon inside the patient's body.
- Sawyer – A robot produced by Rethink Robotics that is used in factories to tend to machines and to test circuit boards. Works alongside humans.
- Paro – Developed by the Japanese research institute AIST to interact with patients suffering from Alzheimer's and other cognition disorders.
- Agrobot – A robot developed by a Spanish entrepreneur that automates the process of picking fruits.
- Spiderbot – A robot created by Intel that is made up of 3D-printed components. Can be controlled via a smartphone or smartwatch.

Artificial intelligence – Finally living up to its potential?

Today most people come across AI on a daily basis. It powers everything from Google Translate, to Netflix's movie recommendations, to Apple's personal assistant, Siri. However, much of this
AI is "invisible" and takes place behind a computer screen, so many users have little idea that it is happening.

The field of AI emerged in the 1950s, when Alan Turing, a pioneering British codebreaker during the second world war, published a landmark study in which he speculated about the possibility of creating machines that could think.3 In 1956, the Dartmouth Conference in the US asked leading scientists to debate whether human intelligence could be "so precisely described that a machine can be made to simulate it".4 At the conference, the nascent field was christened "artificial intelligence", and wider interest (and investment) began to grow. However, the subsequent half-century brought crushing disappointment. In many cases there was simply not enough data or processing power to bring scientists' models, and the nuances of human intelligence, to life. Today, however, many experts believe that we are entering a golden era for AI. Firms like Google, Facebook, Amazon and Baidu agree and have started an AI arms race: poaching researchers, setting up laboratories, and buying start-ups.

To understand what AI is and why it is now developing, it is necessary to understand the nature of the human intelligence that it is trying to replicate. For instance, solving a complex mathematical equation is difficult for most humans. To do so, we must learn a set of rules and then apply them correctly. However, programming a computer to do this is easy. This is one reason why computers long ago eclipsed humans at "programmatic" games like chess, which are based on applying rules to different scenarios. On the other hand, many of the tasks that humans find easy, such as identifying whether a picture is showing a cat or a dog, or understanding what someone is saying, are extremely difficult for computers, because there are no clear rules to follow. AI is now showing how this can be done, and much of it is based on a field of computer science known as machine learning.

Machine learning – The algorithms that power AI

Machine learning is a way for computer programs (or algorithms) to teach themselves how to do tasks. They do so by examining large amounts of data, noting patterns, and then assessing new data against what they have learned. Unlike traditional computer programs, they don't need to be fed with explicit rules or instructions. Instead, they just need a lot of useful data.

Consider the challenge of looking at a strawberry and assessing whether it is ripe. How can a machine-learning algorithm do this? First, large sets of "training data" are needed – that is, lots of pictures of strawberries. If each strawberry is labelled according to its level of ripeness, the algorithm can draw statistical correlations between each strawberry's characteristics, such as nuances in size and colour, and its level of ripeness. The algorithm can then be unleashed on new pictures of strawberries and can use what it has learned to recognise those that are ripe.
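To make the workflow concrete, the following is a minimal sketch, in Python, of the supervised-learning pattern just described, using the open-source scikit-learn library. The measurements, labels and choice of model are invented for illustration; they are not drawn from the report or from any real harvesting system.

# Minimal sketch of supervised machine learning: the model infers the
# "ripeness rule" from labelled examples rather than being given explicit
# rules. Feature values and labels below are illustrative only.
from sklearn.ensemble import RandomForestClassifier

# Training data: each strawberry described by simple measurements
# (red intensity 0-255, weight in grams) with a human-supplied label.
X_train = [
    [210, 18], [225, 20], [190, 15],   # ripe examples
    [120, 9],  [100, 7],  [140, 11],   # unripe examples
]
y_train = ["ripe", "ripe", "ripe", "unripe", "unripe", "unripe"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)            # the algorithm "learns" the pattern

# New, unlabelled strawberries: the model predicts from what it has learned.
print(model.predict([[215, 19], [110, 8]]))   # e.g. ['ripe' 'unripe']

With more labelled examples and richer features, the same pattern scales up; the point is simply that the rule is learned from data rather than hand-coded.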
To perform this recognition, machine learning can use models known as artificial neural networks (ANNs). These are inspired by the human brain's network of more than 100 billion neurons – interlinked cells that pass signals or messages between themselves, allowing humans to think and carry out everyday tasks. In a (somewhat crude) imitation of the brain, ANNs are built on hierarchical layers of transistors that imitate neurons, giving rise to the term "deep learning". When it is shown a new picture of a strawberry, each layer of the ANN deals with a different approximation of the picture. The first layer may recognise the brightness and colours of individual pixels. It passes these observations to the next layer, which builds on them by recognising edges, shadows and shapes. The next layer builds on this again, before finally recognising that the image is showing a strawberry and assessing whether it is ripe or not.

What can machine-learning algorithms do? A surprising amount

Facebook's AI laboratory has developed a machine-learning algorithm called DeepFace that recognises human faces with a 97% accuracy rate. It does so by studying a person's existing Facebook pictures and identifying their unique facial characteristics (such as the distance between their eyes). When a new picture is uploaded to Facebook, the algorithm automatically recognises the people in it and invites you to tag them.

Figure: How neural networks work – How Facebook recognises your face. A neural network is organised into layers: information from individual pixels of the raw image causes neurons in the input layer to pass signals to the hidden layers, each of which deals with increasingly abstract concepts, such as edges, shadows and shapes, until the output layer attempts to categorise the entire image and recognise who the person is.
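As an illustration of this layered structure, the following toy Python/NumPy sketch passes a stand-in "image" through a few layers. The network is untrained and its weights are random, so it shows only how information flows from raw pixels towards an output classification, not a working face or fruit recogniser.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)          # simple non-linearity used between layers

# A tiny, untrained network: 64 input "pixels" -> two hidden layers -> 2 outputs.
# Each layer transforms the previous layer's output, moving from raw pixel
# values towards more abstract features and, finally, a classification.
W1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 2)),  np.zeros(2)

pixels = rng.random(64)              # stand-in for a small greyscale image

h1 = relu(pixels @ W1 + b1)          # first layer: low-level features (brightness, edges)
h2 = relu(h1 @ W2 + b2)              # second layer: higher-level shapes
scores = h2 @ W3 + b3                # output layer: eg, "ripe" vs "not ripe"
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over the two classes
print(probs)

Training consists of nudging the weight matrices so that the output probabilities match the labels in the training data; the layered flow itself is what gives "deep learning" its name.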
Figure: How to diagnose diseases (Sources: The Economist, EIU). Inputs such as age, gender, symptoms, smoking, diet, blood tests, urine tests and genomic data enter the input layer; the hidden layers classify patterns and compare them against evidence; and the output layer makes a diagnosis.

The data that power algorithms do not need to be images. Algorithms can also make sense of articles, video recordings, or even messy, "unstructured" data such as handwritten notes. Once an algorithm has learned something, it can take an action, such as producing a written report explaining the logic of its prediction, or sending instructions to a robot for which pieces of fruit to pick.

Taken as a whole, machine-learning algorithms can do many things – primarily tasks that are routine, or can be "learned" by analysing historical data. Talk into your phone and a Google app can instantly translate it into a foreign language. The results are imperfect, but improving, as algorithms draw on ever-larger "translation memory" databases to understand what words mean in different contexts. Netflix uses machine learning to "personalise" the homepage and movie recommendations that users see. Algorithms infer a user's preferences based on their past interactions on the site (such as watching, scrolling, pausing, and ranking); the interactions of similar users; and contextual factors (time of day, device, location, etc). They then predict the content to which the user will be most receptive. More surprisingly, machine learning is being applied to fields like writing and music composition. While most people would not consider these to be "routine", they are also based on data patterns which can be learned and applied.
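The following is a toy Python sketch of the user-based collaborative-filtering idea behind such personalisation: score titles by how much similar users have engaged with them. The viewing matrix and titles are invented, and production systems such as Netflix's are far more sophisticated.

import numpy as np

# Rows = users, columns = titles; 1 = watched/liked, 0 = not.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
])
titles = ["Drama A", "Comedy B", "Documentary C", "Thriller D", "Drama E"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user_idx, k=2):
    user = interactions[user_idx]
    # Weight other users by how similar their viewing history is to this user's.
    sims = np.array([cosine(user, other) for other in interactions])
    sims[user_idx] = 0
    scores = sims @ interactions          # predicted interest per title
    scores[user == 1] = -np.inf           # don't recommend what was already seen
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))                       # eg ['Documentary C', 'Thriller D']

Real recommenders add contextual signals (time of day, device, location) and learn from implicit feedback, but the underlying logic of inferring preferences from similar users is the same.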
Machine learning in action (Source: EIU):
- Writing – Quill is a platform that automates the writing of financial reports and sports articles for outlets like Forbes.
- Creating recipes – IBM's Watson analysed the Bon Appétit recipe database to recognise tasty food pairings and created an app to suggest recipes based on the ingredients that a person has available.
- Financial advice – Wealthfront is an AI-powered financial adviser that assesses a person's characteristics (such as age and wealth) and objectives, and then uses investing techniques to suggest what assets to invest in.
- Music composition – Iamus is an algorithm that is fed with specific information, such as which instruments should be used and what the desired duration should be. It then creates its own orchestral compositions from scratch.
- Video games – DeepMind developed a general learning algorithm that exceeded all human players in popular video games, from Space Invaders to car racing games. It was purchased by Google in 2014.

Not just learning, but teaching itself and improving

Much of machine learning involves making predictions based on probability, but on a scale that a human brain could never achieve. An algorithm does not "know" that a strawberry is ripe in the same way that a human brain does. Rather, it predicts whether it is ripe by evaluating the data and comparing this with past evidence.

Having labels for the training data (such as "ripe" and "rotten" for pictures of strawberries) makes things easier for the algorithm, but is not a prerequisite. "Unsupervised" algorithms take vast amounts of data that make little sense to a human; if they see enough repeated patterns, they will make their own classifications. For instance, an algorithm may analyse massive sets of genomic data belonging to thousands of people and discover that certain gene mutations are associated with certain diseases (see chapter 2). In this scenario, the algorithm is teaching itself.

Practitioners do not need to spend lifetimes crafting hugely complex algorithms. Rather, "genetic algorithms" are often used. As their name implies, they use trial and error to mimic the way natural selection works in the living world. With each run of the program, the highest-scoring algorithms are retained as "parents". These are then "bred" to create the next generation of algorithms. Those that don't work are discarded. Once they are in use, algorithms can also improve themselves by analysing the accuracy of their predictions and making tweaks accordingly (known as "reinforcement learning").
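A minimal genetic-algorithm sketch, in Python, of the select-breed-mutate cycle just described. The "fitness" task here (evolving a bit string towards all ones) is a deliberately simple stand-in for scoring candidate algorithms or parameter sets.

import random

# Toy genetic algorithm: evolve a 20-bit string towards all ones.
# Fitness is the number of ones; the fittest candidates become "parents",
# and children are built by crossover plus occasional mutation.
TARGET_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(candidate):
    return sum(candidate)

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)      # splice two parents together
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]       # survivors are retained as parents
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children            # weakest candidates are discarded

print(max(fitness(c) for c in population))     # approaches 20 over the generations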
Figure: Survival of the fittest – How natural selection is applied to algorithms (Source: EIU). Randomly generate an initial population of algorithms and evaluate the fitness of each. If an algorithm meets the objective, the process is done; if it meets the survival criteria, the survivors are combined and mutated to create the next generation; otherwise the solution is discarded.

Robots and AI – Merged in symphony

Driverless cars (and other automated vehicles) are perhaps the best example of how robots and AI can come together to awesome effect. The global positioning system (GPS) provides the robot (ie, the car) with a huge set of mapping data, while a set of radars, sensors, and cameras provides data on what is happening around it. Machine-learning algorithms evaluate all of this data and, based on what they have previously learned, issue real-time instructions for steering, braking, and accelerating.

The new era of driverless vehicles (Source: EIU):
- Driverless trucks – In May 2015, Daimler's 18-wheeler Freightliner, called the Inspiration Truck, was unveiled.
- Driverless cars – Google has been working on its self-driving car project since 2009. It is currently being tested in Austin and California in the US.
- Drone planes – DHL is using drones to deliver medicine to Juist, a small German island.
- Drone ships – Rolls-Royce Holdings launched a virtual-reality prototype of a drone ship in 2014.
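The following pseudocode-style Python sketch illustrates the sense-reason-act loop described above. The gps, sensors, perception_model and controls objects are hypothetical placeholders for illustration, not a real vehicle API.

# Hypothetical control loop in the spirit of the driverless vehicles described
# above. All objects passed in are placeholders, not a real vehicle interface.
def drive_loop(gps, sensors, perception_model, controls):
    route = gps.plan_route()                      # mapping data from GPS
    while not route.completed():
        scene = sensors.read()                    # sense: radar, camera and other readings
        # Reason: the learned model interprets the scene and recommends an action.
        action = perception_model.decide(scene, route)
        controls.steer(action.steering_angle)     # act: steering...
        controls.throttle(action.speed)           # ...acceleration...
        controls.brake(action.brake_pressure)     # ...and braking, in real time

The same loop structure fits the fruit-picking robot described next: the camera senses, the ripeness model reasons, and the arm acts.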
Other examples abound. Unlike harvesting corn, fruit picking still relies heavily on human hands. A Spanish firm called Agrobot promises a robotic alternative. Its robot harvester is equipped with 14 arms for picking strawberries. Each arm has a camera that takes 20 pictures per second. Algorithms analyse these images and assess the strawberries' colour and shape against the desired level of "ripeness". If a strawberry is judged to be ripe, the robot's arm positions its basket underneath it, and a blade snips the stem. The whole process takes four seconds. Some human labour is needed to supervise the robot, but much less than is required to pick strawberries manually. The robot can work night and day, and a new version, with 60 arms, is being trialled.5

The rise of robots and AI – Why now, and how far can it go?

The standard joke about robots and AI is that, like nuclear fusion, they have been the future for more than half a century. Many techniques, like neural networks, date back to the 1950s. So why is today any different? The main reason is that the underlying infrastructure powering robots and AI has changed dramatically.

First, the processing power of computer chips has grown exponentially. People are often vaguely familiar with Moore's law – ie, the doubling roughly every two years of the number of transistors that can be put on a microchip. However, its impact is rarely fully appreciated. The designers of the first artificial neural networks in the 1960s had to rely on models with hundreds of transistor neurons. Today, those built by Google and Facebook contain millions. This allows AI programs to operate at a speed that is hard for a human to comprehend.

Second, AI systems run on data, and we live in a world that is deluged with it – from social media posts, to the sensors that are now added to an array of machines and devices, to the vast archives of digitised reports, laws, and books. In the past, even if such data were available, storing and accessing them would have been cumbersome. Today, cloud computing means that much of this data can be accessed from a laptop. In 2011, IBM's Watson was the size of a room. Now it is spread across servers in the cloud and can serve customers across the world.

Finally, robots and AI are increasingly accessible to the world, rather than just to scientists. DIY robot kits are much cheaper than industrial robots, and companies like EZ Robot even allow customers to "print" robot components using 3D printers. In August 2015, Intel presented its "spiderbot" – a spider-like robot constructed from 9,000 printed parts. A growing number of machine-learning algorithms are free and open-source, as is the software on which many robots run (the Robot Operating System). This allows developers to quickly build on each other's work. IBM has also made Watson available to developers, with the aim of unleashing a new ecosystem of Watson-powered apps – like those found in Apple's iTunes store.

How far can robots and AI develop?

Robots and AI already offer the potential to automate, and in some cases enhance, five key human capabilities: movement, dexterity, sensing, reasoning and acting.
How robots and AI emulate human capabilities (Source: EIU):
- Movement (being able to get from place to place) – Robots move in many ways. Hexapods walk on six legs like an insect, snakebots slither and can change the shape of their body, and wheelbots roll on wheels.
- Dexterity (using one's hands to carry out various tasks) – Today's robots boast impressive dexterity. They can fold laundry, remove a nail from a piece of wood, and screw a cap on a bottle.
- Sensing (taking in data about the world, or about a problem) – Computer vision can understand moving images, chemical sensors can recognise smells, sonar sensors can recognise sounds, and taste sensors can recognise flavours.
- Reasoning (thinking about what a new set of data means) – Machine learning analyses data to identify patterns or relationships. It can be used to understand speech, images, and natural language, and can assess new data against past evidence to make predictions or recommendations.
- Acting (acting on what has been discovered) – Natural language and speech generation can be used to document findings. Findings can also be given to a robot, as instructions for how to act.

How far can they expand beyond this? A famous test developed by Alan Turing is the "imitation game". In it, an individual converses with two entities in separate rooms: one is a human and one is an AI-powered machine. If the individual is unable to identify which is which, the machine wins. Every year, the Loebner Prize is offered to any AI program that can successfully trick a panel of human experts in this way. To date, none has come close – although other competitions, with shorter test times, have claimed (much disputed) victories.

The goal of the Turing test is to achieve what is known as "broad AI" – ie, AI that can do all of the things that the human brain can do, rather than just one or two narrow tasks. There are huge debates among scientists about whether broad AI will be achievable and, if so, when. One challenge is that much of how the human brain works remains a mystery, although projects such as the BRAIN Initiative in the US and the Blue Brain Project in Switzerland, which aim to build biologically detailed digital reconstructions of the human brain, are trying to address this. A survey of leading scientists carried out by the philosopher Nick Bostrom in 2013 found that most believed there was a 50% chance of developing broad AI by 2040-50, and a 90% chance by 2075.6 If broad AI is achieved, some believe that it would then continue to self-improve, ushering in an era of "superintelligence" and a phenomenon known as the "technological singularity" (see below).
Case study: What is the technological singularity? (Source: EIU)

If AI can reach a level where it matches the full breadth of human intelligence, some futurists argue that its ability to self-improve, backed by ever-increasing computing power, will lead to an "intelligence explosion" and the rise of "superintelligence". In such a scenario, machines would design ever-smarter machines, all of which would be beyond the understanding, or control, of even the smartest human. The resulting situation – the technological singularity – would be unpredictable and unfathomable to human intelligence. Some dream of a new utopia, while others worry that super-intelligent machines may not have humanity's best interests at heart.

The technological singularity's most famous proponent is Ray Kurzweil, who predicts that it will occur around 2045. Kurzweil argues that humans will merge with the machines of the future, for instance through brain implants, in order to keep pace. Some "singularians" argue that super-intelligent machines will tap into enhancements in genomics and nanotechnology to carry out mind-boggling activities. For instance, "nanobots" – robots that work at the level of atoms or molecules – could create any physical object (such as a car or food) in an instant. Immortality could be achieved through new artificial organs or by uploading your mind into a robot.7

Perhaps not surprisingly, the technological singularity has been dismissed by critics and likened to a religious cult.8 However, it continues to be debated, largely because of the achievements of those advocating it. A serial inventor and futurist, Kurzweil made 147 predictions in 1990 of what would happen before 2009. These ranged from the digitisation of music, movies, and books to the integration of computers into eyeglasses. 86% of the predictions later proved to be correct.9 In 2012 he was hired by Google as a director of engineering. He also launched the Singularity University in Silicon Valley, which is sponsored by Google and Cisco, among others.

Discussions about the technological singularity generate both fascination and derision. It would be unwise to dismiss it completely. While the human brain is complex, there is nothing supernatural about it – and this implies that building something similar inside a machine could, in principle, be possible.

Narrow AI v broad AI v super AI:
- Artificial Narrow Intelligence (ANI) – Equals or exceeds human intelligence, but in narrow areas only, such as language translation, spam filters, and Netflix recommendations. Already in place and improving quickly.
- Artificial Broad Intelligence (ABI) – Can perform the full range of intellectual tasks that a human can. No credible examples exist to date; expert predictions of its arrival range from 2030 to 2100 to never.
- Artificial Super Intelligence (ASI) – Much smarter than the best human brains in every field, including scientific creativity, general wisdom and social skills. Assuming ABI is achieved, expert predictions suggest ASI would follow less than 30 years later.

However, it is crucial to note that the vast majority of today's AI work does not aspire to be
"broad" or "super". Rather, it is "narrow" and fully focused on mastering individual tasks – especially those that are repetitive or based on patterns. Despite this apparent limitation, even narrow AI covers considerable ground.

How will robots and AI affect government?

There is much concern in policy circles about robots and AI. First, there is the fear that they will destroy jobs. Such worries were fuelled in 2013, when a study by academics at Oxford University predicted that 47% of jobs were at risk of replacement by 2030.10 Notably, many "safe" middle-class professions requiring considerable training, such as radiographers, accountants, judges, and pilots, appear to be at risk. Other jobs appear less at risk – for the moment – particularly those which are highly creative, unpredictable, or involve dealing with children, people who are ill, or people with special needs.

Figure: Jobs at risk of automation from robots and AI (Sources: Oxford University, EIU). Estimated probability of automation (0-100%) ranges from low for social workers, dentists, high school teachers, chief executives and fitness trainers, through electrical engineers, software developers, detectives, judges, economists, historians, computer programmers and pilots, to high for real estate agents, paralegals, jewellers, fashion models, loan officers and telemarketers.

The second fear concerns security threats. These gained traction in January 2015, when a group of prominent thinkers, including Stephen Hawking and Elon Musk, signed an open letter calling for
responsible oversight of AI to ensure that research focuses on "societal benefit", rather than simply enhancing capabilities.11 Of particular concern is the risk posed by lethal autonomous weapons systems (LAWS). LAWS are different from the remotely piloted drones that are already used in warfare: drones' targeting decisions are made by humans, whereas LAWS can select and engage targets without any human intervention. According to computer science professor Stuart Russell, they could include armed quadcopters that can seek and eliminate enemy combatants in a city.12 Often described as the third revolution in warfare, after gunpowder and nuclear arms, the first generation of LAWS is believed by experts interviewed by the Economist Intelligence Unit to be close to completion. The remaining barriers are legal, ethical, and political, rather than technical.

Fears about jobs and security are worthy of government attention. Crucially, however, robots and AI also have the potential to greatly enhance the work of government. These improvements are possible today and some government agencies have already started trials. The benefits will come in three main forms and, in theory, could apply to almost all areas of a government's work.

How will robots and AI benefit governments?

Three key benefits – automation, personalisation, and prediction and prevention – apply across administration, health & social care, education, justice & policing, transactional services, and transport & emergency response.
Source: EIU

Automation: Robots and AI can automate and enhance some government work. Such automation will not necessarily spell the end for the employee in question. Rather, it could free up their time to do more valuable and interesting tasks. It could also eliminate the need for humans to undertake dangerous work such as defusing bombs.

Personalisation: In the same way that AI powers Netflix recommendations for subscribers, it could also power a new generation of personalised government services and interactions – from personalised
treatment plans for patients, to personalised learning programmes for students and personalised parole sentences for prisoners.

Prediction and prevention: One of AI's main uses is making predictions based on what it has learned. In certain situations, such predictions could allow governments to intervene and prevent problems from occurring. This could dramatically enhance how police services and courts work, as well as support the strategies of urban planners.

1. Education: Automated exam grading, adaptive learning platforms, and robot kits

Marking exams is repetitive and time-pressured – a combination that can allow mistakes to creep in. As most exams are graded against specific criteria, and past examples are available, marking seems a sensible candidate for AI. In 2012, a study carried out by researchers at the University of Akron in Ohio tasked an AI-powered "e-rater" with assessing 22,000 English literature essays.13 The grades awarded were strikingly similar to those given by human evaluators. The main difference was speed – the e-rater was able to assess 16,000 essays in 20 seconds. It can also provide explanations for its marking, including comments on grammar and syntax.

Unsurprisingly, the technology is not welcomed by all. In the US, the National Council of Teachers of English has campaigned against it, claiming that it misses subtlety and rewards writing that is geared solely towards test results (although this is arguably true of any criteria-based assessment). Critics have also demonstrated how the algorithms can be "tricked" into awarding high marks to an essay that is not actually good. Despite the controversy, the role of e-raters looks set to grow – either as a check on teachers' grading, or working under teachers' supervision. Australia's Curriculum, Assessment and Reporting Authority (ACARA) recently announced that e-raters will mark the country's national assessment programme for literacy and numeracy by 2017.

Less controversial are personalised education (or "adaptive learning") programmes. As with healthcare, a great deal of today's education is delivered on a one-size-fits-all basis, with most students using the same textbooks and doing the same homework. Teachers often have little option but to "teach to the middle", resulting in advanced students becoming bored and struggling students falling behind.14 A company called Knewton has developed a platform that tracks students as they complete online classes in maths, biology, and English, and attempt multiple-choice questions. It assesses how each student performs, compares this with other students' past records, and then decides what problem or piece of content to show next. Pearson, the world's largest education company, has partnered with Knewton to deliver a similar service for college students called MyLab & Mastering, which is used by more than 11m students worldwide every year.15 As with most AI-led solutions, there is a degree of hype about such platforms, and their accuracy and usefulness will depend on scale – the more users they have, the more accurate they will become. However, several platforms have merits, particularly in subjects such as maths that require students to master theories through repeated examples.
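The selection logic behind such adaptive platforms can be illustrated with a toy example. The sketch below is a hypothetical simplification, not Knewton's or any vendor's algorithm: it keeps a per-skill mastery estimate for a student and serves the unanswered question whose difficulty sits just above that estimate.

```python
# A toy adaptive-learning item selector (illustrative only, not any vendor's algorithm).
# It tracks a per-skill mastery estimate and serves the unanswered question whose
# difficulty sits just above the student's current estimate for that skill.

from dataclasses import dataclass, field

@dataclass
class Question:
    qid: str
    skill: str         # e.g. "fractions", "linear_equations"
    difficulty: float  # 0.0 (easy) to 1.0 (hard)

@dataclass
class StudentModel:
    mastery: dict = field(default_factory=dict)  # skill -> estimate in [0, 1]

    def update(self, q: Question, correct: bool, rate: float = 0.2) -> None:
        """Nudge the mastery estimate towards 1 on a correct answer, towards 0 otherwise."""
        current = self.mastery.get(q.skill, 0.5)
        target = 1.0 if correct else 0.0
        self.mastery[q.skill] = current + rate * (target - current)

    def next_question(self, bank: list[Question], answered: set[str]) -> Question | None:
        """Pick the unanswered question closest to slightly above current mastery."""
        candidates = [q for q in bank if q.qid not in answered]
        if not candidates:
            return None
        def stretch(q: Question) -> float:
            return abs(q.difficulty - (self.mastery.get(q.skill, 0.5) + 0.1))
        return min(candidates, key=stretch)

bank = [Question("q1", "fractions", 0.3), Question("q2", "fractions", 0.6),
        Question("q3", "fractions", 0.8)]
student = StudentModel()
q = student.next_question(bank, answered=set())
student.update(q, correct=True)
print(q.qid, student.mastery)  # serves q2 first (just above the 0.5 prior), then adapts
```

Real platforms replace this hand-tuned update rule with statistical models, such as item response theory or Bayesian knowledge tracing, fitted to millions of past answers, which is one reason their accuracy depends on scale.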
More experimental systems can recognise a student's emotional state (such as tiredness or boredom) and provide motivational advice to help them persevere.

Given the complexity involved in interacting with children, humanoid robotic teachers are unlikely to take over classrooms in the near future. However, "robot kits", such as the Lego Mindstorms series, are increasingly used in science, technology, engineering and maths (STEM) classes. Students are given the kits and asked to construct a robot and program it to carry out tasks. Evidence suggests that they can provide a more effective way of teaching various maths and engineering concepts, such as equations.16 In contrast to some traditional STEM teaching methods, they also help to build teamwork and problem-solving capabilities – key "21st-century skills" that schools are trying to nurture.17

2. Health & social care: Personalised treatment, robot porters, and tackling ageing

Much of the excitement around AI has focused on its potential use in healthcare. In 2015, @Point of Care, a firm based in New Jersey, trained IBM's Watson to answer thousands of questions from doctors and nurses on symptoms and treatments, based on the most up-to-date peer-reviewed research. In an interview with the Economist Intelligence Unit, Sir Malcolm Grant, chairman of NHS (National Health Service) England, claimed that the combination of AI and patients' genomic data could allow "clinicians to make more efficient use of expensive drugs, such as those used in chemotherapy, by attuning them to tumour DNA and then monitoring their effect through a course of treatment".

While much of AI's potential in healthcare is still at the trial stage, robots are already present in hospitals. In 2015, the UK media reported excitedly about an NHS plan to introduce robot porters. The machines, which look similar to those found in Amazon's warehouses, will transport trolleys of food, linen and medical supplies. In pharmacies, robot prescription systems are increasingly common. At the University of California San Francisco Medical Center, a doctor produces an electronic prescription and passes it to a robot arm that moves along shelves picking out the medicine needed. The pills are sorted and dispensed into packets for patients. Under the system, the error rate has fallen from 2.8% to 0%.18

Surgical robots are used in a growing number of operations, including coronary bypasses, hip replacements, and gynaecological surgeries. In the US, they carry out the majority of prostate cancer operations (radical prostatectomies).19 In certain operations, they offer greater precision and reduced scarring, and can reduce blood loss.20 However, they are also expensive and must remain under the close control of trained surgeons at all times.
The pros and cons of surgical robots
Strengths: reduced blood loss; improved recovery times; less scarring; increased precision.
Limitations: not suitable for all surgery types; significant training required; high fixed costs; bulkiness of equipment.
Source: EIU

Robots have been touted as a way to address the global ageing phenomenon. Over the next 20 years, the number of people aged 65 and over will almost double to 1.1bn.21 Diseases such as dementia are set to become more prevalent, while the labour force – whose taxes pay for treatments – will shrink. Dementia patients often suffer from a lack of social engagement, which can magnify feelings of loneliness, leading to depression and cognitive decline.22 PARO, a socially assistive robot (SAR), looks like a baby seal and engages patients by acting like a pet. It can recognise when a person is calling its name and can learn from repeated behaviour – if a patient scratches its neck after picking it up off the floor, it will look for a scratch every time it gets picked up.23 PARO is approved by the US Food and Drug Administration as a therapeutic device, and a 2013 trial found that it had a moderate-to-large effect on patients' quality of life.24

What type of medical robots might we see in the future? Scientists are excited about the potential of "micro-bots". These tiny robots would move inside patients' bodies – helping to deliver drugs, address trouble spots (such as a fluid build-up), or repair organs. Researchers are also working on robots that are soft and resemble body tissue. In the US, researchers at MIT have developed prototype "squishy robots" that can switch between hard and soft states and could, in theory, move through the body without damaging organs.

3. Justice & security: Online dispute resolution and predictive policing

AI is already in play in the legal world. Courts use automatic speech recognition to transcribe court records outlining who said what during a trial. Judges and lawyers use apps like ROSS Intelligence, built on
IBM's Watson platform, to post questions such as "Is a bankrupt company allowed to do business?". The app delivers instant answers, complete with citations and useful references to legislation or case law.

In time, AI could usher in a new generation of automated online courts, particularly for the small civil disputes that often clog up judicial systems. Canada is launching an online tribunal for small civil disputes that will allow claimants to negotiate with the other party and, failing that, face an online adjudication (run by humans). AI could enhance the system by "predicting" the outcome of a dispute before claimants begin. An algorithm has already been developed that can predict the results of more than 7,000 US Supreme Court cases with more than 70% accuracy, using only data that was available before the case.25 If embedded into an online court, such predictive algorithms could encourage claimants to drop an ill-advised claim or encourage the other party to settle. They could also be used by governments to channel legal aid more effectively, by identifying those who have a worthy case but no financial means of pursuing it.

Police and security services are also using AI. Facial-recognition algorithms have been closely studied, and there is excitement over recent enhancements that allow them to recognise somebody even when their face is obscured. As revealed by Edward Snowden, the US National Security Agency also uses voice recognition software to convert phone calls into text in order to make the contents easier to search.26

Police are especially interested in using AI for "predictive policing". As Tom Davenport, a professor at Babson College, put it, "Why should the police only show up after the crime has been committed?". US firm PredPol analyses a feed of data on the location and time of past crimes to predict "hotspots" (areas of 500 feet by 500 feet) where crime is likely to happen within the next 12 hours. A study published in October 2015 found that the algorithm was able to predict 4.7% of crimes in Los Angeles, compared with 2.1% for experienced analysts. It concluded that deploying extra police in hotspot areas would save the Los Angeles Police Department US$9m per year.27 In Germany, researchers at the Institute for Pattern-Based Prediction Techniques have developed an algorithm for predicting burglaries based on the "near repeat" concept – ie, in an area where a burglary happens, repeated offences can be expected nearby within a short time frame. The algorithm predicts burglaries within a radius of about 250 metres and a time window of between 24 hours and seven days. The institute claims that in the 18 months since its implementation in certain trial cities, arrests have doubled thanks to additional patrolling, and the number of burglaries has fallen by as much as 30%.28
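The "near repeat" rule lends itself to a very small amount of code. The sketch below is a hypothetical illustration of the flagging logic, not the institute's actual system: it marks any location within 250 metres of a recent burglary, between 24 hours and 7 days after it, as elevated risk.

```python
# Toy "near repeat" flagging (illustrative only): a location is flagged if a past
# burglary occurred within 250 metres and between 24 hours and 7 days ago.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import hypot

@dataclass
class Burglary:
    x_m: float      # position in metres on a local grid (assumed projection)
    y_m: float
    when: datetime

RADIUS_M = 250.0
WINDOW = (timedelta(hours=24), timedelta(days=7))

def elevated_risk(x_m: float, y_m: float, now: datetime, history: list[Burglary]) -> bool:
    """True if any past burglary is both spatially and temporally 'near' this point."""
    for b in history:
        age = now - b.when
        close = hypot(x_m - b.x_m, y_m - b.y_m) <= RADIUS_M
        recent = WINDOW[0] <= age <= WINDOW[1]
        if close and recent:
            return True
    return False

history = [Burglary(100.0, 200.0, datetime(2015, 10, 1, 22, 0))]
print(elevated_risk(180.0, 260.0, datetime(2015, 10, 3, 9, 0), history))  # True: ~100 m away, ~35 h later
```

Operational systems add crime type, weight recent events more heavily and score risk continuously rather than as a yes/no flag, but the core idea, a space-time proximity test on recent offences, is the same.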
However, such approaches do raise ethical questions. For instance, if algorithms suggest that a crime is more likely in areas populated by certain ethnic groups, should police carry out more intensive patrols there, or is this a new form of racial discrimination?

Newer algorithms are looking beyond past crime data to inform their predictions. In 2015, researchers at the University of Virginia examined how Twitter posts could be assessed to predict crime (although the legal environment for this activity is hazy in many countries). Their algorithm also drew on weather forecasts – different types of extreme weather conditions have been shown to lead to spikes in crime.29 The researchers claim that its accuracy is greater than that of models that use only historical crime data.30

Predictive policing can also come in other guises. In the wake of the marathon bombing of 2013, the city of Boston trialled predictive surveillance cameras. The AISight (pronounced "eyesight") platform, also in use in Chicago and Washington DC, starts by learning when a surveillance camera is showing "typical behaviour", such as somebody walking normally along a street. It then learns "untypical behaviour" that is associated with crimes, such as unusual loitering or a flurry of movement that may indicate that a fight is breaking out. It can then monitor surveillance cameras for abnormal behaviour, and send alerts to the authorities when it spots something. Unsurprisingly, the system has aroused privacy concerns, although supporters argue that it is less damaging than more discriminatory attempts to prevent crime, such as stop-and-search interrogations based on racial profiling.

Justice and policing – The benefits of prediction
Predicting the likelihood of somebody re-offending informs decisions on parole and the length of prison sentences; predicting crime hotspots informs the assignment of extra police cover in risky areas; predicting the outcome of a court case creates stronger incentives to mediate early and avoid court.
Source: EIU

4. Administration: Automating visa processing, patent applications, and fraud detection

A significant amount of day-to-day government bureaucracy is routine and based on deciding whether a person qualifies for something – be it a pension top-up or a visa. Much like marking exam papers, civil servants must apply several rules to each case, which can lead to backlogs and mistakes creeping in. If the qualification guidelines for such processes are clear, AI can speed up processing, according to Andy Chun, associate professor of computer science at City University in Hong Kong. Chun worked on an algorithm for the Hong Kong government to process immigration, passport and visa applications. With millions of forms received yearly, the immigration office had previously struggled to meet demand.
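The approve/reject/refer pattern that Chun describes in the paragraph below can be sketched in a few lines. This is a hypothetical illustration only, not the Hong Kong system; the fields, rules and thresholds are invented for the example.

```python
# Hypothetical eligibility triage (invented rules, for illustration only):
# clear passes are approved, clear failures rejected, and borderline cases
# are routed to a human officer whose decision is logged for later learning.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    APPROVE = "approve"
    REJECT = "reject"
    REFER = "refer to human"

@dataclass
class Application:
    valid_passport: bool
    months_funds_shown: int   # months of living costs evidenced (invented field)
    prior_overstay: bool

def triage(app: Application) -> Outcome:
    if not app.valid_passport or app.prior_overstay:
        return Outcome.REJECT          # clear failure of a hard rule
    if app.months_funds_shown >= 6:
        return Outcome.APPROVE         # clearly meets the guidelines
    return Outcome.REFER               # grey area: human judgement needed

# Decisions made by officers on referred cases become labelled examples,
# which is how the share of automatable cases can grow over time.
human_decisions: list[tuple[Application, Outcome]] = []

app = Application(valid_passport=True, months_funds_shown=3, prior_overstay=False)
decision = triage(app)
if decision is Outcome.REFER:
    human_decisions.append((app, Outcome.APPROVE))  # an officer's ruling, recorded
print(decision.value)  # "refer to human"
```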
In an interview with the Economist Intelligence Unit, Chun explained that the algorithm approves some applications, rejects others, and classifies the remainder as "grey areas" where human judgement is needed. In these cases, the algorithm absorbs the choices that humans make to allow for future automation.

IP Australia, the country's intellectual property agency, is automating patent searches – the process by which a proposed invention is examined against existing inventions. A similar approach could be used to detect tax fraud. Today, governments rely on forensic accountants and lawyers wading through mountains of paperwork, such as annual business filings, to detect possible cases of tax fraud. An MIT researcher, Jacob Rosen, has explored an AI-led alternative. Rosen and his colleagues trained an algorithm to recognise specific combinations of transactions and company partnership structures that were often used in a particular tax dodge, and then unleashed it on new data.31

5. Transactional services: Personal assistants and "helperbots"

As explained above, when citizens apply for something – be it a new passport or registering ownership of a property – AI can help governments automate the approval process. However, AI can also help to enhance the experience of the citizen. In recent years, governments have tried to move these transactional services online, but uptake is often low. For instance, in the UK more than 50% of vehicle tax payments are still sent by post, even though 75% of British drivers buy their car insurance online. This carries a heavy cost. The UK's Cabinet Office estimated that a digital transaction can be up to 20 times cheaper than a telephone transaction, 30 times cheaper than a postal transaction, and 50 times cheaper than a face-to-face transaction.32

The benefits of digitisation – Cost of delivering services in the UK
1. Customs transactions: £0.1 average cost per transaction; 99.9% digital take-up among users
2. Trade mark renewals: £3.0; 66.0%
3. Driving licence renewals: £10.3; 30.0%
4. Outpatient appointments: £36.4; 12.6%
5. Income support claims: £137.0; 0.0%
Sources: UK Cabinet Office, EIU

One way to improve uptake is to personalise digital services. Rather than offering pages of densely written "frequently asked questions", Singapore's government is piloting IBM's Watson as a "virtual assistant", much like Apple's Siri. It will allow a citizen to tell Watson, in natural language, exactly what they want to do. Watson will prompt them for more details, search through thousands of potential answers, and return the most appropriate one.
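At its simplest, this kind of virtual assistant can be approximated as retrieval over a bank of already-answered questions, with a follow-up prompt when the match is weak. The sketch below is a hypothetical toy, not IBM Watson's pipeline; the questions, answers and threshold are invented.

```python
# Toy "virtual assistant" as keyword-overlap retrieval over canned answers
# (illustrative only; real systems use trained language models and dialogue state).
FAQ = {
    "how do i renew my driving licence": "You can renew online at the licensing agency's website...",
    "am i eligible for a state pension": "Eligibility depends on your age and contribution record...",
    "how do i register ownership of a property": "Submit the transfer form to the land registry...",
}

def tokens(text: str) -> set[str]:
    return set(text.lower().split())

def answer(query: str, threshold: float = 0.35) -> str:
    """Return the best-matching canned answer, or ask for more detail if nothing matches well."""
    q = tokens(query)
    scored = []
    for question, reply in FAQ.items():
        overlap = len(q & tokens(question)) / max(len(q | tokens(question)), 1)  # Jaccard similarity
        scored.append((overlap, reply))
    best_score, best_reply = max(scored)
    if best_score < threshold:
        return "Could you tell me a bit more about what you are trying to do?"
    return best_reply

print(answer("I want to renew my driving licence"))
print(answer("help"))  # weak match, so the assistant prompts for more details
```

Asking the citizen whether their issue was resolved, as the Singapore pilot does, gives a system like this labelled feedback for improving its matches over time.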
According to Paul Macmillan, Deloitte's public-sector industry leader based in Canada, such personal assistants could understand and answer everything from "Am I eligible for a pension or benefit?" to "How do I get a driver's licence?". The system can constantly improve its answer quality by asking the citizen whether their issue has been resolved.

This new breed of digital service does carry the risk of exacerbating the digital divide in societies by alienating those who are not able to use digital services. In the short term, governments have responded by setting up "digital kiosks" where humans assist users to carry out the service in question. However, an alternative approach is to embed virtual assistants such as Watson into a helper robot that could scan people's details and physical documents. Such "helperbots" are already being tested in the private sector. As Gerald Wang, program manager for Asia-Pacific at IDC Government Insights, pointed out, Henn na, a Japanese hotel, has used helperbots to automate its check-in process. The robots also store luggage and check room cleanliness. The process is not seamless, however. Visitors have claimed that helperbots struggle when dealing with unexpected obstacles, such as visitors forgetting their passports. However, according to Mr Wang, their introduction "shows what can be achieved".

6. Transport and emergencies: Moderating the impact of urbanisation

In 1950, only 30% of the world's population lived in urban areas. By 2030, this will hit 60%, with almost 10% living in "megacities" of 10m or more people.33 Many urban transport systems are already creaking and require regular maintenance. In Hong Kong, AI algorithms are used to schedule the 2,600 subway repair jobs that take place every week. They do so by identifying opportunities to combine different repairs and evaluating criteria such as local noise regulations. Today, the subway enjoys a 99.9% on-time record – far ahead of London or New York.34

The repair work is still carried out by humans. In time, robots could play a greater role. In the UK, the University of Leeds recently won a £4.2m (US$6.4m) grant to help create "self-repairing cities", where small robots identify and repair everything from potholes to streetlights and utility pipes.35 Despite the eye-catching name, in the short term robots are likely to be more useful for monitoring and assessing infrastructure than for repairing it, given the advanced dexterity that repairs often require.36

Urbanisation also risks exacerbating pollution, as China has experienced. According to a recent study, air pollution contributes to 1.6m deaths in China every year – one-sixth of all deaths in the country.37 On a given day, the severity of pollution depends on various factors including temperature, wind speed, traffic, the operations of factories, and the previous day's air quality. In August 2015, IBM China revealed that it is working with Chinese government agencies on a programme to predict
the severity of air pollution 72 hours in advance. It claims that its predictions are 30% more precise than those derived through conventional approaches.38 The goal now is to extend the prediction horizon, giving the authorities more time to intervene – for instance, by restricting or diverting traffic, or even temporarily closing factories.

AI can help governments create better urban-planning strategies. Using software developed by the US Defense Advanced Research Projects Agency (DARPA), Singapore is analysing huge masses of data, such as anonymised geolocation data from mobile phones, to help urban planners identify crowded areas, popular routes, and lunch spots, and then use this information to make recommendations about where to build new schools, hospitals, cycle lanes and bus routes.39

Disaster management is another urban-planning application. In the US, the state of California is trialling AI technology developed by start-up One Concern which can predict which areas of a town are likely to be worst affected by an earthquake. The system uses data on the age and construction materials of buildings. When the early signs of an earthquake are identified, it combines this with seismic data, so that emergency resources can be targeted. Robots will increasingly work alongside humans to carry out such emergency efforts. In Japan, following the Fukushima nuclear power plant explosion of 2011, drones used infra-red sensors to survey and gather data from locations too dangerous for humans.

How should governments respond?

For governments, the development of robots and AI holds significant promise, but it also raises challenges that need to be managed. This requires a multi-faceted response.

1. Invest in trials and manage accountability

All government agencies should be asking how robots and AI – and the automation, personalisation and prediction that they offer – could enhance their work. In many cases, applications will build on what has been happening for years. For instance, police forces have long monitored areas following a robbery to prevent future incidents. Predictive policing algorithms are a logical, more sophisticated extension of this.

Expectations must be kept in check. Most trials will need close involvement from human staff initially, and will work best in narrow, tightly defined areas (such as trying to predict burglaries rather than all types of crime). Moreover, "predicting" should not be confused with "solving". Predicting crime and intervening to stop it does nothing to address its root causes. Predicting who is most likely to develop cancer is certainly valuable, but potential sufferers will still need to address difficult questions about how to prevent, or treat, the disease (see chapter 2).

It is also critical that accountability is not automated when a task is. "Black box" algorithms whose rationale or logic is not understood will not be accepted by key stakeholders. Some AI suppliers have recognised this. IBM's new "Watson Paths" service provides doctors and medical practitioners with a step-by-step explanation of how it reached its conclusions.
2. Support research and debate on ethical challenges

Robots and AI give rise to difficult ethical decisions. For instance, how does AI interpreting data from surveillance cameras affect privacy rights? What if a driverless car's efforts to save its own passenger risk causing a pile-up with the vehicles behind it? If a robot is programmed to remind people to take medicine, how should it proceed if a patient refuses? Allowing them to skip a dose could cause harm, but insisting would impinge on their autonomy.

Such debates have given rise to a new field, "machine ethics", which aims to give machines the ability to make appropriate choices – in other words, to tell right from wrong. In many cases philosophers work alongside computer scientists. In January 2015, the Future of Life Institute, set up by Jaan Tallinn (co-founder of Skype) and Max Tegmark (MIT professor), among others, to mitigate the existential risks facing humanity, published a set of research priorities to guide future AI work.40 AI companies have also set up ethics boards to guide their work, while government-backed research institutes, including the US Office of Naval Research and the UK government's engineering-funding council, are evaluating the subject.

Despite what some media might report, the main concern is not that future AI might be "evil", or have any sentience whatsoever (ie, the ability to feel). Rather, the concern is that advanced AI may pursue its narrow objectives, even positive ones (such as passenger safety or patient health), in a way that is misaligned with the wider objectives of humanity.41 To explain the point, a stark "paperclip" example is often used. In this scenario, an AI is tasked with maximising the production of paperclips at a factory. As the AI becomes more advanced, it proceeds to convert growing swathes of the earth's materials, and later those of the universe, into a massive number of paperclips. Although the example is simplified, it illustrates the broader concern – that the narrow goals of AI can become out of sync, or "misaligned", with the broader interests of humanity.

3. Foster new thinking about the jobs challenge

Trying to predict the impact of robots and AI on jobs is difficult. Participants on both sides make valid points. Positive commentators argue that past technological improvements – from the industrial revolution to the rise of the Internet – have always led to increased productivity and new types of jobs. This, in turn, has made most of society better off (even if individual groups, such as farmers or miners, have suffered). However, critics retort that past technological developments are a poor guide because robots and AI have the potential to replace a far wider set of jobs, including many skilled professions in fields as diverse as healthcare, law, and administration. Furthermore, the new technology-based firms that emerge are unlikely to be "job-heavy". Google and Facebook employ a fraction of the staff of more traditional firms of similar size, such as General Motors or Wal-Mart.

As a result, the rise of robots and AI has the potential to exacerbate two challenges facing governments: widening inequality and long-term unemployment. While productivity in a country may increase, the benefits may accrue to a narrow pool of investors, rather than to employees.
In response, commentators such as Martin Ford, author of Rise of the Robots, have suggested a guaranteed income for every citizen. This "digital dividend" would be recognition of the fact that many of the new advances in robots and AI rely on research that was originally funded by governments. As Ford has pointed out, such a policy would be politically challenging in many countries. An alternative approach, suggested by Jerry Kaplan, author of Humans Need Not Apply, is for governments to try to spread firm ownership more broadly by reducing the corporate tax rate for firms with a significant number of individual shareholders. This would allow individuals to benefit more from the robots and AI revolution.

4. Invest in education, but not the traditional sort

The stock response to any technical challenge is to invest in education to "future-proof" a country's population. However, no type of conventional high-school or university education can adequately prepare students for a world where robots and AI are prominent. The speed of change is too great and nobody can predict what skills will be needed in ten years' time.

Competency-based education programmes hold more promise. They focus on teaching individual skills (or competences) and can be completed at any stage in an employee's career. They can also be quickly designed and rolled out in response to companies' ever-changing needs. Udacity, an online-education firm, has teamed up with companies such as AT&T to provide "nanodegrees" – job-related qualifications that can be completed in six to 12 months for $200 per month. Dev Bootcamp offers a nine-week course for code developers, paid for in part by a success fee: the firm charges employers for each graduate hired, after they successfully complete 100 days on the job.42 To fund such programmes, Jerry Kaplan has called for companies to offer "job mortgages". Under this system, workers would commit to undertaking ongoing training as part of their employment contract. The cost of the programmes would be deducted from their future wages.43

5. Tackle the security issues at a global level

The development of LAWS – which could select and attack targets without human intervention – is closer than most imagine. Indeed, there is a first-mover advantage in their development. Once they become relatively straightforward to produce, military powers will struggle to avoid the temptation to gain an advantage over their foes. Once one country is thought to have them, others are likely to follow suit.

This has culminated in the "Campaign to Stop Killer Robots", fronted by an alliance of human rights groups and scientists. They call for a pre-emptive ban on developing and using LAWS, in the same way that blinding laser weapons and unexploded cluster bombs were banned in the past. However, a full ban is opposed by some countries, including the UK and the US, which have argued that existing law is sufficient to prevent the use of LAWS. Others have questioned whether LAWS are ethically worse than traditional weapons. If they more accurately identify targets, while also meeting the traditional humanitarian rules of distinction, proportionality, and military necessity, this could result in fewer unintended deaths than traditional human-led warfare.

As with nuclear weapons, the best long-term solution is an international agreement with clear
provisions on what countries can do. The UN has already held a series of meetings on LAWS under the auspices of the Convention on Certain Conventional Weapons in Geneva, Switzerland. The next week-long meeting will be held in April 2016. However, controlling the development of LAWS is likely to prove more difficult than controlling nuclear weapons, as developing them in secret will be much easier and quicker.

Conclusion

Robots and AI have been heavily hyped in the popular media discourse, with advocates and sceptics each presenting dramatic visions of utopian, or dystopian, futures. Yet in this polarised debate, many of the nuances and subtleties are lost. On one hand, supporters tend to over-promise what their technologies can do, and often have vested interests in those technologies. On the other hand, the underlying trends that make robots and AI a reality are developing much faster than most people realise, and a tipping point in their development has been reached. The range of tasks that robots and AI will soon be able to undertake is also far beyond what many people appreciate.

This could dramatically enhance the work of governments – by automating and personalising services, and by better predicting challenges before they arise. However, robots and AI also pose risks to security, employment, and privacy, and raise knotty ethical challenges that require wider debate. Governments are right to progress robots and AI trials, but must also give considered thought to the challenges they bring.
Chapter 2: Genomic Medicine
Executive Summary

Genomic medicine uses an individual's genome – ie, their unique set of genes and DNA – to personalise their healthcare treatment. Genomic medicine's advance has been boosted by two major developments. First, new technology has made it possible, and affordable, for anybody to quickly understand their own genome. Second, new gene-editing tools may allow practitioners to "find and replace" the mutations within genes that give rise to disorders.

Much of genomic medicine is relatively straightforward. Rare disorders caused by mutations in single genes are already being treated through gene editing. In time, these disorders may be eradicated altogether. For more common disorders, such as cancer, the response is more complex. However, patients' genomic data could lead to more sophisticated preventative measures, better detection, and personalised treatments. Other potential applications of genomic medicine are mind-boggling. Researchers are exploring whether gene editing could make animal organs suitable for human transplant, and whether "gene drives" in mosquito populations could help to eradicate malaria.

The fast pace of development has led to ethical concerns. Some worry that prospective parents may try to edit desirable traits into their embryos' genes, to try to increase their baby's attractiveness or intelligence. This, critics argue, is the fast route back to eugenics, and governments need to respond appropriately.

This chapter starts with an overview of what exactly genomic medicine is and the recent advances that have led to such excitement. It then examines three key ways in which genomic medicine could transform healthcare delivery. The chapter concludes with suggestions for government leaders on how to respond.

Key definitions

Biology terms

Organism: An organism is any living biological entity. Examples include people, animals, plants, and bacteria. Organisms are made up of cells.

Cells: The human body is composed of trillions of cells. They are the smallest unit of life that can replicate independently, and make up the tissue in organs such as the brain, skin and lungs.

Chromosomes: Most human cells contain a set of 46 chromosomes, which in turn contain most of that person's DNA. These 46 chromosomes come in 23 pairs. One chromosome in each pair is inherited from the mother, and one from the father.

DNA: DNA (or deoxyribonucleic acid) is mainly located in chromosomes. It provides a set of instructions for how a person will look, function, develop, and reproduce. DNA is made up of sequences of four chemical "blocks", or bases: adenine (A), cytosine (C), guanine (G) and thymine (T).
Genes: Genes are individual "sequences" of DNA that determine the physical traits that a person inherits and their propensity to develop certain diseases. For example, the sequence ATCGTT might be an instruction for blue eyes. Individuals inherit two versions of each gene – one from each parent.

Proteins: Some genes instruct the body on how to make different proteins. Proteins are necessary for an organism to develop, survive and reproduce. For instance, the BRCA1 gene is known as a "tumour suppressor" because it can instruct a protein to repair breast tissue.

Genome: A genome is an organism's complete set of DNA. For a human, it contains around three billion DNA letters (or bases), and around 20,000 genes, located on 46 chromosomes. Most cells in a person's body contain the same, unique genome.

Genome sequencing: Genome sequencing is undertaken in a laboratory, and determines the complete DNA sequence of an organism – ie, how the DNA "blocks", or bases, are ordered.

Genetic variation: 99.9% of DNA is identical in all people in the world, regardless of race, gender or size. However, the remaining "genetic variation" explains some of the common differences in appearance, disease susceptibility, and other traits.

Gene mutations: Gene mutations occur when an individual's gene becomes different from the version found in most people. These mutations could be inherited from one's parents, or they could develop due to environmental factors such as exposure to toxins.

Gene disorders: Up to 10,000 diseases, called monogenic disorders, are caused by a mutation in a single gene. Other, more complex disorders, such as most cancers, are caused by a combination of multiple gene mutations and external factors such as diet and exposure to toxins.

Gene therapy: Gene therapy replaces a mutated gene with a healthy copy, or introduces a new gene to help fight a disease. A new technology, CRISPR-Cas9, allows gene mutations to be "edited" and replaced with "correct" genes. However, it has not yet been used on humans.

Gene disorders

Cystic fibrosis: Cystic fibrosis is a life-threatening disease. A mutated gene causes a thick build-up of mucus in the lungs, pancreas, liver, kidneys, and intestines. The build-up makes it hard to breathe and digest food, and leads to frequent infections.

Familial hypercholesterolaemia: A condition that causes patients' "bad cholesterol" levels to be higher than normal, increasing the risk of heart disease and heart attacks at an early age.

Haemophilia: Haemophilia is a group of disorders that can be life-threatening. Mutated genes mean that a patient's blood does not clot properly, potentially leading to excessive bleeding. Internal bleeding can damage key organs and tissues.

Huntington's disease: Huntington's disease causes the progressive breakdown of nerve cells in the brain. Over time it increasingly affects a patient's movement, cognition and behaviour.
Mitochondrial diseases: Mitochondrial disease is a group of disorders caused by mutations in mitochondrial DNA, the DNA of the cell structures that convert food into energy. Symptoms include loss of muscle coordination, learning disabilities, heart disease, respiratory disorders, and dementia.

Muscular dystrophy: A group of conditions that gradually cause the muscles to weaken, leading to an increasing level of disability. One of the most common forms is "Duchenne muscular dystrophy"; men with the condition will usually live only into their 20s or 30s.

Maple syrup urine disease (MSUD): A condition caused by a gene defect which prevents the body from breaking down certain parts of proteins, leading to a build-up of chemicals in the blood. In the most severe form, MSUD can damage the brain during times of physical stress (such as infection, fever, or not eating for a long time).

Sickle cell disease: A group of disorders that cause the red blood cells to become rigid and sickle-shaped, in contrast to normal red blood cells, which are flexible and disc-shaped. The abnormal cells can block blood vessels, resulting in tissue and organ damage and severe pain. One positive effect is that sufferers are protected from malaria.

Severe combined immunodeficiency (SCID): A group of potentially fatal disorders in which a gene mutation results in patients being born without a functioning immune system. This makes them vulnerable to severe and recurrent infections.

Tay-Sachs disease: A fatal disorder that primarily occurs in children. It occurs because a mutated gene can no longer produce a specific enzyme, resulting in a fatty substance building up in brain cells and nerve cells which destroys the patient's nervous system.

Thalassaemia: Thalassaemia is a group of disorders in which the body makes an abnormal form of haemoglobin, the protein in red blood cells that carries oxygen. If left untreated, it can cause organ damage, liver disease, and heart failure.

Source: EIU, Genetics Home Reference, NHS
Background

In May 2013 the actress Angelina Jolie underwent a much-publicised double mastectomy. She did so after being informed that she has a faulty version of the BRCA1 gene, giving her an 87% chance of developing breast cancer.1 A year later, researchers found that the number of women in the UK who had undergone BRCA testing had doubled – a phenomenon now referred to as the "Angelina Jolie effect".2 Although she may not have planned it, Ms Jolie gave a significant boost to the field of genomic medicine – broadly defined as using an individual's genomic information (or DNA) in their clinical care. In recent years, the field has enjoyed landmark breakthroughs that could lead to a revolution in how diseases are diagnosed, treated, prevented, and even eradicated.

What is genomic medicine? A move away from one-size-fits-all

If a person feels unwell they typically visit a doctor, usually after their symptoms become prolonged or pronounced. If they are lucky, they will make that visit in good time and the doctor will make a diagnosis by checking their symptoms and asking questions about family history and lifestyle. For more complex conditions, various tests and the involvement of specialists may be needed. Once diagnosed, the patient will embark on a treatment plan. While the impact of a disease can feel very personal, the treatment usually isn't. Patients receive treatment based on nationally accepted procedures and protocols. If the first treatment doesn't work, a different approach is tried – an iterative process that can turn into a race against time for more serious diseases.

Genomic medicine (sometimes referred to as personalised medicine or precision medicine) promises a more nuanced approach. It takes as its premise the fact that each person's biological make-up is unique, and so their diagnosis and treatment should be as well – using that individual's unique genomic information, or DNA. Its advance has been boosted by two major developments. First, the human genome was successfully sequenced in 2001. This provided a set of "blueprints" for how the human body works. It was followed by rapid technological advances over the next 14 years that made it possible for anybody to quickly and cheaply sequence their own genome. Second, sophisticated "gene-editing" tools have been developed that can potentially modify faulty genes and help treat, or prevent, certain diseases.

The mapping of the human genome and the rise of "next-gen" sequencing

In 2001 the Human Genome Project (HGP), the world's largest collaborative biological research project, announced the first ever successful sequencing of the human genome. In terms of scale, this has been likened to an "internal" voyage of human discovery on a par with the external voyage that brought man to the moon.3
A person's genome is their full set of DNA. It is located in most cells in their body and is packaged into 23 pairs of chromosomes. One chromosome in each pair is inherited from the person's mother and one from the father. Each chromosome is made up of individual sequences of DNA, called genes. Much like how letters in the alphabet are arranged to form words, four basic blocks (A, C, G and T) of DNA are arranged to form genes. The HGP discovered that a person has approximately 20,500 genes – far fewer than previously thought. It also identified these genes' locations and how their basic blocks are ordered.

This is important because a person's genes determine their traits – ie, how they look and function. They also determine that person's propensity to develop certain diseases. This is because genes provide instructions to the body for how to make proteins. Proteins, in turn, carry out key functions such as repairing cells or protecting against infections. If the basic blocks in an individual gene become garbled or "mutated", it may be unable to produce the protein needed. For instance, the BRCA1 gene is known as a "tumour suppressor" because it can repair mutated DNA in breast tissue. If it becomes damaged by a mutation (as in Angelina Jolie's case), the risk of breast cancer increases.

99.9% of the DNA in human genomes is identical across all people. However, the remaining 0.1% (ie, a person's gene variations) is of huge importance. Some of these gene variations, or mutations, are inherited from our parents, while others develop over time (due to smoking or exposure to toxins, for example). Some mutations are harmless, while others are associated with diseases. The impact of many remains unknown.

The HGP provided a vital starting point for the understanding of our genes. In the decade since it concluded, scientists have launched a set of sequencing projects to understand how genes (and gene mutations) vary across different people. For instance, how do the genes of Alzheimer's patients differ from those of people who do not have the disease? How do the genes of patients who respond well to treatment differ from those who do not?

Initiatives to sequence genomes around the world
Human Genome Project (started 1990, completed 2003): An international research collaboration to carry out the first ever sequencing of the human genome.
Personal Genome Project (started 2005): The Harvard-led project aims to sequence and publish the genomic data of 100,000 volunteers.
1,000 Genomes Project (2008-2015): An international research project that sequenced more than 2,500 genomes and identified many rare variations.
100,000 Genomes Project (started 2013): A four-year project to sequence 100,000 genomes from UK NHS patients with rare diseases and cancers, and their families.
Genome Arabia (started 2013): A project to sequence up to 500 individuals from Qatar, Bahrain, Kuwait, UAE, Tunisia, Lebanon, and KSA.
Saudi Human Genome Program (started 2013): A five-year project to analyse more than 20,000 Saudi genomes to better understand the genetic basis of disease.
Source: EIU
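The basic idea behind these comparisons, lining up a sequenced fragment against a reference and looking for positions where the "blocks" differ, can be shown in a few lines. The snippet below is a deliberately simplified, hypothetical illustration with invented sequences; real variant calling works on billions of short reads, handles insertions and deletions, and uses statistical quality scores.

```python
# Toy comparison of a sequenced DNA fragment against a reference sequence.
# Real pipelines align millions of short reads and score variants statistically;
# this sketch only reports positions where a single aligned base differs.

REFERENCE = "ATCGTTGCAATCG"   # invented reference fragment
SAMPLE    = "ATCGTTGCGATCG"   # invented sample with one substituted base

def point_differences(reference: str, sample: str) -> list[tuple[int, str, str]]:
    """Return (position, reference_base, sample_base) for each mismatching position."""
    if len(reference) != len(sample):
        raise ValueError("This toy example assumes aligned sequences of equal length.")
    return [(i, r, s) for i, (r, s) in enumerate(zip(reference, sample)) if r != s]

variants = point_differences(REFERENCE, SAMPLE)
print(variants)                      # [(8, 'A', 'G')] - a single-letter substitution
identity = 1 - len(variants) / len(REFERENCE)
print(f"{identity:.1%} identical")   # most of the fragment matches, as with real genomes
```

Whether a difference like this matters for health is precisely the question the sequencing initiatives listed above are trying to answer at population scale.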