Yuval Noah Harari and Fei-Fei Li on Artificial Intelligence: Four Questions that Impact All of Us

More questions than answers were generated during a recent conversation at Stanford University between a pair of giants of the Artificial Intelligence debate: Yuval Noah Harari and Fei-Fei Li. Nicholas Thompson, editor-in-chief of WIRED, moderated the 90-minute conversation in Memorial Auditorium, filled to its 1,705-seat capacity.

The purpose was to discuss how AI will affect our future.

Harari, a history professor at the Hebrew University of Jerusalem and two-time winner of the Polonsky Prize for Creativity and Originality, is the author of international best sellers “Sapiens: A Brief History of Humankind” and “Homo Deus: A Brief History of Tomorrow.”

Li is a renowned AI researcher, engineer and computer science professor. One of the most prolific academics in Artificial Intelligence today, her work in deep learning and computer vision is used by companies and research groups around the world. She is best known for her role in creating ImageNet, a hand-annotated data set of 14 million images used extensively in computer vision applications.

They touched on some of the most important topics about AI and technology, including whether we can still believe in human agency; what democracy looks like in the age of AI; and whether AI will ultimately hack or enhance humanity.

Rather than driving us toward stale talking points, Li and Harari challenged us to contemplate many important questions about the consequences of Artificial Intelligence technology for individuals, including freedom and choice, and its impact on the legal, economic and political systems of our world.

Four questions from the conversation help untangle AI’s impact on the individual:

1. Can we still believe in human agency and free will?
2. If humans can be hacked and manipulated, are our systems of government, commerce, and personal liberty still legitimate?
3. Will we outsource self-awareness to technology, letting algorithms know us better than we know ourselves?
4. Can AI be reframed in a human-centered way, with ethics built into its development?

Like many who saw it, I came away from the talk with a sense of urgency. These are pressing questions that AI practitioners, policymakers and the public should be thinking about. All are a consequential part of the AI debate.

But we need to act quickly. “The engineers won’t wait. And even if the engineers are willing to wait, the investors behind the engineers won’t wait. So, it means that we don’t have a lot of time,” Harari warned.

Agreed.

The discussion jumped into the deep, difficult topic of free will and agency, skipping superficialities completely.

An argument questioning the validity of free will seems at first blush like an extraneous, theoretical endeavor, something quite outside the scope of the engineering discipline. Indeed, many of the challenges discussed circle back to topics philosophers have debated for millennia.

Only this time, there’s an entirely new angle: technology has advanced to the point where many of our closely held beliefs are being challenged, as Harari notes, “not by philosophical ideas, but by practical technologies.”

Harari has criticized the central notions of free will and individual agency for most of a decade.

He’s not alone. Thanks to advances in the technology available to measure neural activity, numerous neuropsychological experiments have mounted a new assault on free will.

This has led many top neuroscientists to doubt the freedom of our decision-making.

But while the science is still maturing, the consequences of our free will being manipulated — Harari called this “Hacking Humans” — pose a great risk within our society.

An organization may endeavour “to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me.”

It will be our challenge to decide not only what these manipulations, enhancements or replacements should be, but also who should be making the decisions about them in the first place.

We might wonder how we want to make choices about the possibilities for human enhancement.

“Who decides what is a good enhancement and what is a bad enhancement? So how do you decide what to enhance if, and this is a very deep ethical and philosophical question — again that philosophers have been debating for thousands of years — [we don’t have an answer to the question] ‘what is good?’ What are the good qualities we need to enhance?” Harari asked.

It is natural for many of us to “fall back on the traditional humanist ideas” that prioritize personal choice and freedom. However, he cautioned: “None of this works when there is a technology to hack humans on a large scale.”

If the very idea of human agency and free will is under debate, it becomes very difficult to decide what technology should be allowed to do. This affects every area of our lives: what we choose to do, what we buy, where we go, and how we vote. It remains unclear who should be making decisions about technology’s scope at all.

This ambiguity leaves us face to face with a significant issue, thanks to parallel advances in biotechnology (B), computing power (C) and data analytics (D). These three, according to Harari, can already be combined to Hack Humans (HH).

For the math-minded among us, he summarizes it as B × C × D = HH.
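To make the shorthand concrete, here is a minimal sketch in Python, assuming a toy multiplicative model with notional 0-to-1 maturity scores. The function name and the scoring are my illustration, not anything Harari specified. The multiplicative form captures the intuition that the danger comes from the combination: remove any one factor and the capacity collapses.

```python
# Toy illustration of Harari's shorthand B * C * D = HH.
# The 0-to-1 "maturity scores" and the multiplicative form are
# assumptions of this sketch, not a quantitative model he proposed.

def hacking_capacity(biotech: float, compute: float, data: float) -> float:
    """Notional capacity to 'hack humans' from three enabling factors."""
    return biotech * compute * data

print(hacking_capacity(0.9, 0.9, 0.9))  # all three factors mature -> ~0.73
print(hacking_capacity(0.9, 0.9, 0.0))  # no data analytics -> 0.0
```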

With modern technology, Hacking Humans may become a very real possibility.

“This is the moment to open the dialogue, to open the research in those issues,” Li added.

If manipulation is present, how are the systems of government, commerce, and personal liberty still legitimate?

If humans can be “hacked” and our behaviour and beliefs manipulated, what is the limit of this subtle control?

We may accept that we can be manipulated in small ways (who doesn’t have a sudden craving for a cinnamon bun when walking into a bakery where they have just been made?), but surely there must be limits to the ways in which our behaviour can be controlled.

At this point, no one seems to know for sure what these limits of manipulation may be.

However, the tactics of manipulation certainly are known. The criminals and con artists who use them have been venerated for their boldness and reviled for their predation in equal measure, their stories told in the media, cinema, literature and television.

Collectively, we disbelieve our own personal susceptibility to manipulation. Rather, we assume those who have been manipulated are the stupid few. “The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated,” concluded Harari.

Weaponizing love for manipulation is not only possible, but well-documented. This theme is consistent with the long history of romance scams; many of us have heard of the “long distance lover who needs a sudden influx of money for some minor emergency.” Romance scams are the most “successful” of all scam types, costing Americans $143 million last year.

Maria Konnikova, the Columbia-trained psychologist and author of The Confidence Game, reminds us that manipulation “is accomplished, first and foremost, through emotion.” This leaves us in a vulnerable state, since “feeling, at least in the moment, takes over from thinking.”

After all, the manipulation system — Artificial Intelligence or not — doesn’t have to experience love in return to manipulate the human capacity for connection and intimacy with another. “To manipulate love is not the same thing as to actually feel it,” Harari explained.

Without diminishing the importance of human love, we should acknowledge that its biological and neurochemical components have been well studied.

Given the ever-greater amount of information each of us provides, the deeper understanding we are gaining of our own biology, and the falling cost of analyzing large amounts of data, the possibility of even costlier scams of this type cannot be ignored. These scams play on the very real and very human emotions of loneliness, isolation, and the desire for connection to another.

We are all susceptible to this kind of manipulation. “We want to believe in what they’re telling us,” Konnikova reminds us.

Few have a definitive view on the limits of what might be possible with the advances in data science and technology. Li is optimistic. “I do want to make sure that we recognize that we’re very, very, very far from that. This technology is still very nascent.” But the stakes are getting higher and higher, and if the technology is nascent at present, how long will it remain so?

As Li commented: “I think you really bring out the urgency and the importance and the scale of this potential crisis. But I think, in the face of that, we need to act.”

For millennia humans have been outsourcing some of the things that our brains do. Writing allows us to keep precise records instead of relying on our memory. Navigation moved from mythology and star charts to maps and GPS.

But with AI we have a radical opportunity: What if self-awareness is one of the things humans will outsource to technology?

Harari recounted a personal story about his own journey of self-discovery, confessing that he was unaware that he was gay until he was in his twenties. This revelation prompted a thought-provoking moment that showed how much we all struggle to see our own blind spots. “I only realized that I was gay when I was 21. And I look back at the time and I was I don’t know 15, 17 and it should have been so obvious.”

Harari continued: “Now in AI, even a very stupid AI today, will not miss it.”

This opens up a very interesting new possibility: Might algorithms know things about us that we don’t yet know?

In the history of AI, this has been a tempting thought for decades.

Even today, using the data we provide, it may be possible to diagnose a myriad of conditions, from depression to cancer, earlier than we otherwise would, meaningfully impacting our lives.

Beyond our physical and mental health, it is provocative to wonder what else large-scale analysis might unlock from the data we now provide. After all, there are facets of the human experience that persist across culture, generation, and station.

As analysis methods become more advanced and the available data increases, what experiences might we learn that we share with our friends, our neighbors, and folks across the globe whose lives hardly resemble our own?

Two challenges, however, remain.

There is a danger in giving an algorithm, even a very smart one, the authority to tell us things about ourselves, especially when what it tells us is difficult to question and validate.

If an algorithm predicts that we have cancer, we can get tested. But if it tells us something much more ambiguous, say, whether we are popular, we may be inclined to take it as true because we have no way to validate it. This in turn might lead us to make different decisions out of misplaced trust in a potentially faulty algorithm.

Mathematician Hannah Fry, author of “Hello World,” notes that we might believe what algorithms are saying to such an extent that it overrules our personal judgment.

She relates the story of a carload of tourists “who tried to drive through water to get to a destination they were really interested in visiting. They did not overrule the navigation and had to be rescued.”

Who will be there to rescue us if our very self-perception has been forced askew?

Furthermore, using data to connect with the experiences of others, and with ourselves, is a separate issue from algorithms knowing deeply personal things about us and sharing them with other actors instead of us.

“What happens if the algorithm doesn’t share the information with you, but it shares the information with advertisers? Or with governments?” wondered Harari.

Even now, our information on social media is used to serve us “relevant” ads, and we are only beginning to find out who is paying for us to see them.

“This is a good example, because this is already happening,” said Harari.

AI has been used to predict whether we will quit our job or break up with our significant other. Both are deeply personal decisions that many of us would be reluctant to let personal friends, much less impersonal organizations, know about.

Li is skeptical that an algorithm might be able to surpass our own introspection this way. “I’m not so sure!” she said, giving us hope that we may be able to thoughtfully tackle some of these challenges while there’s still time.

“Any technology humanity has created starting with fire is a double-edged sword. So it can bring improvements to life, to work, and to society, but it can bring perils, and AI has these perils,” she reminded us.

With the discussion weaving through so many pertinent disciplines, Li offered us a deft, albeit open-ended, proposal to begin to tackle the many issues we face: to reframe Artificial Intelligence in a human-centered way.

Li has begun that change at Stanford University with an ambitious aim, one that provides a functional template for other organizations, regardless of their size or focus.

She has established the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which will bring together individuals from many different areas for a new collaborative dialogue.

The Institute has three tenets: that AI should be inspired by human intelligence, that the development of AI must be guided by its human impact, and that applications of AI should enhance and augment humans, not replace them.

“We’re not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many more other disciplines into the study and development of AI in the next chapter,” said Li about the Institute.

This recommendation stems from the challenges researchers and practitioners face in gaining and keeping public trust, providing a positive user experience, and replacing the fearmongering around AI with well-thought-out policy recommendations.

Defining clear goals within the Artificial Intelligence community is a major step toward a common cause we can all rally around, and crossover between the various disciplines is gaining traction.

“This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists, social scientists, to business leaders, to civil society, to governments, to come at the same table to have that multilateral and cooperative conversation,” Li stressed.

But we’ve come to a crossroads.

Indeed, many of the ethical questions that we face today are the results of decisions that have been made by engineers: the ethos of “move fast and break things” has ultimately resulted in things really breaking.

Working in tech can blind creators to the effects of the technologies they build. There are myriad unintentional outcomes: large online retailers crowding out small businesses and changing the composition of our cities, to name just one.

How can we balance our desire for innovation with the risks that come with it? When companies succeed without the slowdown that comes with being deliberate and thoughtful about their AI offerings, should we take measures to curb their growth?

Li is optimistic about the inclusion of ethics in the software discipline.

“Human-centered AI needs to be written by the next generation of technologists who have taken classes like [Stanford Political Science Professor] Rob [Reich]’s class [Computers, Ethics and Public Policy], to think about the ethical implications, the human well-being.”

However straightforward this goal might sound, many in the AI community may wonder whether it is also the most challenging.

“We cannot possibly do this alone as technologists,” Li warned.

How do we convince the heavily technical folks working in Artificial Intelligence, individuals who may not want to concern themselves with nebulous subjects like societal effects of their work, that they should care about these things? And further, should that be the expectation? Do we require an ethical dimension to every role throughout the industry?

Li is not so sure.

“Some of us shouldn’t even be doing this. It’s the ethicists, philosophers who should participate and work with us on these issues.”

Although very few forward-looking people working in the industry would dismiss its importance, the paradigm shift it requires should not be minimized. There has historically been a great deal of disdain within the tech community for any non-technical or tech-adjacent subject. Will the AI community respect the importance of these new perspectives or will we roll our eyes at anyone who doesn’t understand backpropagation?

Even Li quipped, when asked whether Harari’s work is on her syllabus, “Not on mine, sorry. I teach hardcore deep learning. His book doesn’t have equations.”

The conversation was well-timed to raise important new questions about the ways in which Artificial Intelligence may affect us as individuals in the coming decades. Because we lack a good understanding of the limits of manipulation, Harari urged us to mitigate the potential for “Hacking Humans” by focusing on self-awareness:

“It’s the oldest advice in all the books in philosophies is know yourself. We’ve heard it from Socrates, from Confucius, from Buddha: get to know yourself. But there is a difference, which is that now you have competition…You’re competing against these giant corporations and governments. If they get to know you better than you know yourself, the game is over.”

But collaboration, as Li suggested, is needed. This work is starting to take shape in numerous organizations around the world.

The conversation between Harari and Li marks the beginning of a new type of work in AI.

“We opened the dialog between the humanist and the technologist and I want to see more of that,” she said.

So do I.

About the Author:

Briana Brownell is a data scientist turned tech entrepreneur, futurist and innovator. Currently, Briana is Founder and CEO of PureStrategy.ai, a technology company that creates and deploys AI coworkers into the enterprise so employees can make faster, data-driven decisions. A frequent keynote speaker, expert and author, she is known for making highly technical topics accessible to non-experts, as well as leading a thoughtful technical discussion on the science behind AI.

