
“Artificial Intelligence in the Marketplace” with David Parkes & Iyad Rahwan



Okay well, where should we start? Maybe we
should start at the very beginning, in 2003, which is how long we’ve known each other for.
You reminded me of that earlier, that’s right. It’s been a long time, yeah.
I’m sure you remembered it before I reminded you. So it was in Melbourne, Australia, I
was doing my PhD there, therefore I was one of the minions doing the local organization
for the conference, the little people. But you were one of the big people who was invited.
I think you were one of the keynote speakers there and you were a rising star.
I remember being shocked when I was invited to do that.
Well, it was a good shock I hope. It was a nice shock. I even remember still
the topic of the lecture that I gave. It was about mechanism design and computational complexity.
I remember that it was just such a great opportunity to speak to that audience about the work I’d
been doing at that point. Okay. And by taming computational complexity,
do you think … It’s interesting because now you’re thinking about taming computation
in maybe a different way, so it’s about keeping AIs from becoming too powerful or having
too much control over our lives. Do you think there are connections between those two?
This work on mechanism design theory, which as you know goes all the way back into economic
theory from the 1960s, I always thought that the interesting hook into it from a computer
science perspective is that it’s a way to enable AIs to lead to good outcomes even if the AIs are individually rational and trying to optimize on behalf of the person they represent, and we know that it’s really challenging to reason strategically in situations of interaction. And so a beautiful thing with mechanism design is that you can
design a game that is very simple to play. Removing the computational complexity was
to say, you’re a rational agent, but I’m gonna make it really simple for you to know what
to do. And in particular, what you should do to be
rational is just to describe your preferences, your goals, how you think about the environment
that you’re in. And that’s always what I’ve thought is just so beautiful about the idea
of mechanism design. So I think it’s, in a way, I really like that
perspective. I think in a way it’s a good way of thinking about what’s happening today,
because today we feel like we’re kind of beginning to lose control because there’s these machines
that have access to vast amounts of data, vast computational power, and that they can
kind of out-compete us in some domains because they can make better decisions, more informed
decisions than mere humans, at least in some domains. And what you’re saying is that if
you design the institution correctly, you could potentially eliminate that advantage.
So do you think that mechanism design could be, in some way, a solution to this AI, runaway
AI problem? That’s a very interesting point. So I think
that it could possibly be a solution there. There’s a parallel. When people think about
designing mechanisms for matching high school students into high schools, there’s an argument
that’s made for strategy-proofness, which is this property that you cannot game, it’s
not useful to game, and the argument- Which means that if you’re a smart decision-maker,
smarter decision-maker than somebody else, you’re not going to get any advantage.
And there’s an equity argument made, which is that if the mechanism were not designed in this way, children from lower socioeconomic groups may not know, because of the peer groups they’re in, to misrepresent their preferences for schools in just the right way, and then they may not gain an advantage and may actually be at a disadvantage to the other students. And so there, in that setting, this
very public, very current setting, it was argued, and it was a compelling argument,
that said we should design the rules such that it levels the playing field across participants.
So I think it’s an interesting way to think about things.
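To make strategy-proofness a bit more concrete, here is a minimal Python sketch of student-proposing deferred acceptance, in the spirit of the school-matching mechanisms being described; the student and school names are made up, and this is a toy illustration rather than any district’s actual implementation. Under this kind of mechanism, simply reporting your true preference list is a dominant strategy for students, which is the “no benefit to gaming” property.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (toy version).
    student_prefs: student -> list of schools in true preference order
    school_prefs:  school  -> list of students in priority order
    capacities:    school  -> number of seats
    """
    rank = {s: {st: i for i, st in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}   # next school each student proposes to
    held = {s: [] for s in school_prefs}            # tentatively held students per school
    free = list(student_prefs)

    while free:
        student = free.pop()
        if next_choice[student] >= len(student_prefs[student]):
            continue                                # student has exhausted their list
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        held[school].append(student)
        # keep only the highest-priority students up to capacity, reject the rest
        held[school].sort(key=lambda st: rank[school][st])
        rejected = held[school][capacities[school]:]
        held[school] = held[school][:capacities[school]]
        free.extend(rejected)

    return {st: s for s, students in held.items() for st in students}

# hypothetical toy instance: truth-telling is a dominant strategy for the students
students = {"ana": ["north", "south"], "ben": ["north", "south"], "chi": ["south", "north"]}
schools  = {"north": ["ben", "ana", "chi"], "south": ["ana", "chi", "ben"]}
seats    = {"north": 1, "south": 2}
print(deferred_acceptance(students, schools, seats))  # ben gets north; ana and chi get south
```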
I also, I don’t really like to think about the viewpoint that it’s machines with lots
of information and lots of computational capabilities kind of competing with people. I always like
to really think more about machines representing people.
Yup. And making people’s lives better by well-representing
what people want to achieve. I agree with you, but I guess my gut feeling
is that that’s nice in some domains, but in other domains it may be very difficult to
make sure that different people have equal representative power in the form of machines
because some people can afford more powerful machines, can afford to buy data that they
can use to train the machines more than others. So maybe let’s think of algorithmic trading on markets as such an example. You’ve got different hedge funds, investors who can afford more expensive cables to plug into the market and more expensive machines that can calculate strategies faster and model other people’s machines faster. So in a way,
it is not necessarily equal, and presumably the hedge fund with the most money would also be able to afford the smartest programmers and so on, to be able to out-compete the others.
But even there, we interviewed Michael Wellman earlier and one of his ideas is this discrete
time trading where- Discrete time, which is a nice way to address
that. … to say that trading is going to clear every second or every 10 seconds or whatever, so that it doesn’t pay to be faster and to have a faster fiber optic connection; that levels the playing field. So that’s kind of
an example. That’s exactly right. It’s an example from
finance where you level the playing field by making everybody have to go slowly.
Yup. And making there not be a last move advantage.
There is no last move because everybody moves at the same time, essentially.
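As a rough sketch of the discrete-time idea, the toy code below collects the orders that arrive during an interval and clears them all in one batch, so shaving microseconds off arrival time buys nothing; the trader names, prices, and the midpoint pricing rule are made up for illustration and are not how any real exchange clears.

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str      # "buy" or "sell"
    price: float   # limit price
    qty: int

def clear_batch(orders):
    """Clear one discrete-time batch: every order collected during the interval is
    treated identically, so there is no advantage to arriving a microsecond earlier."""
    buys  = sorted((o for o in orders if o.side == "buy"),  key=lambda o: -o.price)
    sells = sorted((o for o in orders if o.side == "sell"), key=lambda o: o.price)
    trades, b, s = [], 0, 0
    while b < len(buys) and s < len(sells) and buys[b].price >= sells[s].price:
        qty = min(buys[b].qty, sells[s].qty)
        price = (buys[b].price + sells[s].price) / 2  # simple crossing rule for the sketch
        trades.append((buys[b].trader, sells[s].trader, qty, price))
        buys[b].qty -= qty
        sells[s].qty -= qty
        if buys[b].qty == 0:
            b += 1
        if sells[s].qty == 0:
            s += 1
    return trades

# every interval (say, every 10 seconds) the exchange would call clear_batch on
# whatever arrived in that window, regardless of who submitted first
batch = [Order("fund_a", "buy", 101.0, 5),
         Order("fund_b", "buy", 100.5, 3),
         Order("fund_c", "sell", 100.0, 6)]
print(clear_batch(batch))
```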
Yup. But finance is a very extreme case. I mean,
yes I agree, in finance there’s a lot of money to be made. Therefore, a lot of sophistication,
a lot of data, a lot of computational power has led to an arms race, has led to the firm with the most power generating outsized profits, and then there’s this kind of competition. But the question is what democratization will look like when AI is more ubiquitous. I mean, there’s another view that says that essentially the software, the algorithms,
will be cheap, will be widely available, will be widely understood and everybody just like
most people in the US at least have a smartphone, increasingly in developing countries too.
Everybody’s smartphone will be equipped with a reasonably good AI.
Now data is where the power lies, and presumably, if we keep on the path we’re going on, it looks like the data will be in the hands of a few big firms, and then the
question is how are they controlling their data and for whom do they make the power that
comes from that data available. That’s right.
A very interesting question, to think about what’s gonna happen there. But I do think
that algorithms and the methods of AI should be readily available for everybody; they should be able to lift up everybody’s lives, to help everybody make better decisions about healthcare
or education, for example. Okay. Yeah, I hope that it’s true. I very
much hope so, but my mind always goes to politics and how today let’s say people are training
models that will predict, that will very effectively influence people’s political preferences in
very indirect ways by just deciding what content to show them or what news to show them or
what feeds. If this becomes commoditized and widely available, then it’s really the political party with the most money that will be able to convince the most people. Which is kind of what happens today already, and it’s just the same kind of game. It’s not necessarily going to … Of course anybody can go online and purchase some ads, right? But it’s the coordinated effort of large players that can then utilize this market to do persuasion en masse for one particular candidate.
I agree with you that that’s happening today, but again I’m gonna try to paint what I think
is a more positive version of what could happen in the future. I think you could argue and
might argue that the reason that’s powerful today is because we all have very limited
attention. We’re having to make our own decisions, we’re making our own decisions based on very
small amounts of information. We are being influenced in all kinds of ways, not only
through political ads, but through the things that our friends say to us or the messages
that fly by on Facebook. Think about a different world where you have an Iyad bot and your
Iyad bot knows and understands you extremely well ’cause it’s lived with you your whole
life, understands what you passionately, internally, inherently care about. What is it that drives
you? What do you care about, what’s important for your family, how do you think about the
world? And because it’s AI, it doesn’t have the information processing limitations that
people have and therefore it’s not subject to manipulation through adverts. It’s a rational
information processor. Sounds like a lot better world to me than the current world.
Now I see, because what you’re saying is if we, as voters, citizens who vote, if we are augmented in our decision making for our small contribution to the political process,
if we’re augmented with something even more rational that can process vast amounts of
data, identify do we care more about politicians being consistent in their promises or do we
care more about their personal beliefs, for instance, or what group they belong to, or whatever.
Their integrity, the circles they keep, the access to money they have, the friends they
have. Yeah, I guess then I would ask who wrote that
AI, who wrote the Iyad bot, and that’s a very important question.
I agree with you. I think that I posited in my response that you have this Iyad bot that
somehow is a very good representation of what’s important to you, so I think that raises a
profoundly interesting question, which is how can you verify, validate, trust, believe
that this software that’s probably living in the cloud, it’s in the cloud, it’s on your
phone, I’m not sure where it is, but that the software really is representing who you
are and what you care about. It sounds like a profoundly important question and I think
a challenging question. So, okay, that gets me thinking about the
following question, because on one hand you’re saying if we could design institutions so that the rules almost make it irrelevant how much computational power we have and how much our AIs have, right? The AI that’s representing me. All I need to know, all I need to specify, is my preferences somehow. On the other hand, figuring out what my preferences are, itself,
is a computationally challenging problem as well. So, do you think that the solution is
going to be a combination of those two? The right mechanism that incentivizes the right
kind of honest computations about my preferences or do you think they’re kind of independent
problems? I think that the politician is trying to put
him or herself forward in a way so that you develop your preferences. So, you don’t understand
your preferences because you don’t know everything about that individual, you don’t understand
their policies. Okay, so there are two parts of preferences. There’s understanding what’s
important to you and then there’s evaluating an option in reference to what’s important
to you. Those are both difficult. There’s a lot of research into understanding latent preferences versus revealed preferences, shoulds versus wants. I should go to the gym, but guess what, I’m gonna watch Netflix instead. Does the fact that I watch Netflix reveal that my kind of true, inherent preference is to be watching Netflix rather than going to the gym, or not? There’s
a lot of debate about this. If your AI is watching you for your whole
life, how exactly is it going to really understand what’s important to you? Should it be kind
of factoring that maybe some things you’re doing are based on your behavioral biases
or should it not? Should it say, “No, no, no. The way he acts is how he really thinks
about the world?” So this is part of the challenge, but I wanna say yes, I think the mechanism
design piece, important as it is, is only going to remove the game theory, is only going to remove the counter-speculation. The piece that remains, in a sense, is knowing our preferences, knowing how we think about the world. It doesn’t remove that piece.
Okay. But it does seem that it kind of almost goes all the way down to free will because
if the AI knows what I really want even though my revealed preferences don’t reflect that in some way, but the AI somehow knows that my true self really wants to go to the gym, or knows my true calling, it raises a question about who am I then? Am I the person
who’s living or am I the person who’s hidden who only the AI knows?
Correct. I think this gets to agency. I think that one of the things that we hold most important
as individuals is our own agency. And I think that probably the types of AIs that are adopted and enjoyed and successful are AIs that help us in lots of ways, but still leave us with
agency. Yup.
Without agency then why are we living if we don’t get to make choices that we think are
important to us? So I agree with you. There are of course today commitment devices that
we can choose to opt into that will encourage us to go to the gym rather than watch Netflix.
Like if I don’t go to the gym, some money from my account will go to a charity I hate
or something like that. A charity I hate or I have a way to commit
such that I only watch the show I really like to watch when I’m at the gym. So I couple
a thing I like with the thing that I don’t like. Things like this.
Yup. And you’ve kind of put explicit constraints. You choose to put explicit constraints on
your own behavior. You choose to constrain yourself. But I do agree.
I think that it’s not clear what preferences an AI should adopt on your behalf. And I think
this gets to the control issue. I really like this view of a partnership between a person
and an AI and I think it’s very important that that is a true partnership and that you
should be able to control what’s important to you about the AI if it’s helping you.
I agree with you because I think if we don’t have that, then I might as well be switched
off and the AI just continues. We don’t want to switch you off.
Thank you. So, I wanted to take a step back maybe and ask you a little about the evolution
of the study of AI a little bit, because you’ve occupied sort of an important position. Despite your young age, you’ve been involved with the Harvard computer science department for a while now, since back in 2001. And you held a number of important positions, including the area
dean for computer science and you were directly responsible for hiring a lot of the faculty
at the department and kind of growing it to a very successful department, one of the top
in the world. So, I’m interested in your reflection on the evolution of the different topics that
were considered very important and very crucial for a computer science department to have
when it comes to AI. Right. Well it has been remarkable to see.
I think that as a field, AI, 20 years ago there were brilliant people asking more theoretical
questions. What is intelligence? How do we go about even conceptualizing how to start
to think about building intelligent systems? Kind of really detailed, beautiful examples
of ways in which things would fail to be intelligent. I think that over the 20 years we’ve moved
to a place where AI now is rapidly becoming a big science and big engineering field where
as we get closer, I would say inevitably as we get closer to human level intelligence,
I think that one of the very clear ways to be effective in AI now is to be able to work
at a different scale, to be able to work with data certainly, to be able to pull together
the different pieces that you need such that you can focus on the part of the AI question
that you find interesting. So for example, common sense reasoning is
important, planning is important, learning is important, natural language is important,
multi-agent interactions are important. How do we study and advance any one of these?
Is it possible to do it without having kind of access to some of those other capabilities?
I think this is one challenge I see for how to keep advancing AI, at least within the academy.
I’m reminded a lot of the examples that were not just in textbooks, but also in the research
papers, in the early 2000s and late 90s, where it’s the good old blocks world, a little agent moving blocks on a table so that it can stack a bunch of blocks subject to some constraints, or little mazes with super simple interactions. The world is basically a bunch of pixels and a bunch of squares and there’s like four things you can do at any given point. The AI agents have full view of the state of the world, so everything was simpler. And
this was not just in textbooks, this was cutting edge science being published, discussed at
conferences. So you’re saying that now there’s no way these things would fly. We’re at much
more, now the agents will, maybe they’re implementing a trading strategy based on some kind of learning process, optimization, but also Twitter feeds about public sentiment on stocks and whatnot,
and that’s an example where you’re combining a whole bunch of things and there’s many more
examples. Is that what you mean? Yes, although I think it’s important that
we can still do the narrow, very theoretical research as well. But I do think that one
of the changes I’ve seen over the years and in part, of course, the past five years we’ve
had this explosion around data and around deep learning, I think that there’s this question
as to what do you need to be effective in terms of actually having impact in advancing,
truly advancing the field at the moment. And the thing that I see is that as we go forward
the next five, 10, 15, 20 years, it seems like as we get towards more and more sophisticated
AI, it raises the question do we need some open infrastructure to really make it easy
for any one research lab to be able to be effective, even though any one research lab
cannot possibly have competencies in all of the important areas.
So do you think, how do you think this would be enabled? Because at the moment it seems
like big corporate labs really have the, are best equipped to do this, right? And some
of the successes recently of DeepMind seem to be stemming from their ability to bring
in really well-compensated and very technically skilled people to do kind of big engineering
projects where they combine a lot of know-how from people who really know how to optimize
some processing functions and scale them up, people who really do like extremely specialized
matrix computations and people who can put all of these things together and that seems
to be giving them their edge. Do you think this can be reversed if we have kind of a
more open infrastructure? Is that [crosstalk 00:25:49]?
I hope so and I think that we need to, we really need to find a way such that universities
can work with corporations. AlphaZero, the recent advance in being able to train an AI
to play Go and other games like chess, to me the step from Alpha-
AlphaGo to AlphaZero. AlphaGo to AlphaZero, to me this stuff didn’t
sound so profound, but I realize that there’s a narrative around that step, which is this
narrative around are we always succeeding in AI by learning to mimic how people play
where AlphaGo was leveraging, in part, there was a lot of self-play, but there was also a lot of leveraging of a huge database of how Go experts play, to this place where now it’s purely back to the [inaudible 00:26:52] self-play, here’s how you can train up an AI to play a game as complex as Go, which I remember learning about when I was in grad
school. I remember people saying it was never going to be solved and now it can be solved
through self-play. There’s a qualitative distinction that I think
I had missed, at least in people’s minds as to how distinct that is between AI through
mimicking and AI through just being AI and playing against itself and learning really
creative things that people have not done before. But even with AlphaZero, the DeepMind folks say, “We also used,” I forget, “four X, maybe 10 X less computation than AlphaGo.”
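For a sense of what learning purely through self-play means at toy scale, here is a hedged sketch: a single tabular agent learns the game of Nim (take 1 to 3 stones; whoever takes the last stone wins) entirely by playing against itself, with no database of expert games. It is nothing like AlphaZero’s scale, network, or search; it only illustrates the self-play loop.

```python
import random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def train_by_self_play(episodes=20000, eps=0.1, alpha=0.1):
    """Tabular self-play on Nim: one value table, both 'players' are the same agent."""
    Q = {}  # Q[(stones, move)] = estimated value for the player about to move
    for _ in range(episodes):
        stones, history = 21, []
        while stones > 0:
            moves = legal_moves(stones)
            if random.random() < eps:
                move = random.choice(moves)              # explore
            else:
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        reward = 1.0                                     # the player who moved last wins
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward                             # alternate between the two "selves"
    return Q

Q = train_by_self_play()
# with enough episodes the learned policy tends to leave the opponent a multiple of 4
for stones in (5, 6, 7):
    best = max(legal_moves(stones), key=lambda m: Q.get((stones, m), 0.0))
    print(f"{stones} stones -> take {best}")
```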
But I just heard yesterday from a student who used to be at Harvard, is now in finance
actually, but is kind of re-implementing these algorithms. He said even AlphaZero is taking
two weeks to train, which is a pretty long period of time and pretty hard for people
who are trying to advance these open source AlphaZero-like projects. It’s pretty hard
for them to get their hands on the computational resources, to try to do that effectively.
To even make it a- Even to be able to replicate what’s very recently
happened. So I think what I would like to see is I would like to see a public/private
partnership where governments around the world can add some resources and companies can add
some resources and help to make sure that we keep the competitiveness of what’s happening
within universities. Yup.
We need to be able to educate, we need to be able to explain to legal scholars, to MBA
students, what AI is, what machine learning is. And we certainly need to make sure that
we keep creating more AI researchers and AI engineers who can go into industry.
So, I guess, excuse me, it would be great to have some water right about now. That’s
fine. So that’s really fascinating. I kind of see two ways in which we might get out
of this. So one of them is that we do some sort of public/private partnership or the
research that AI scientists are doing in industry is somewhat regulated so that you can’t keep
it completely closed. There are some efforts now as well on creating these kinds of partnerships, in computational social science, for example, like allowing people to study
systems like Facebook. But then on the other hand, things could turn
into kinda pre-trained models, so the idea that maybe we’ll have models that are very
expensive to train, but once they are trained … An example of that would be pre-trained
image, like object detection systems. But if they are truly general purpose, once they
are trained they become a commodity and you can just trade them. You can even buy them
in a hardware chip, so it’s like baked into the hardware. It’s like a mental module that
performs a specific function and then the engineering effort becomes an act of combination
and maybe tweaking some of the parameters or working on one specific thing. You’re trying
to improve speech, but you’ve got all the other bits working. You think that is a likely
trajectory? Oh, absolutely. And I think there are really
important questions around architecture for AI systems. How do we engineer AI systems
such that there can be innovation around different pieces that can be brought together to keep
taking us forward? And I agree that, for example, we’ve made really good progress on vision
and it would be great now if everybody could leverage the progress that’s being made and
now go ahead and work on using these vision systems to maybe improve our transportation
systems. Autonomous vehicles making use of vision algorithms to detect objects.
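A small sketch of the “pre-trained model as a commodity module” idea, assuming a PyTorch/torchvision-style setup (newer torchvision releases spell the pretrained flag as weights=...): an off-the-shelf vision backbone is frozen and reused as a fixed component, and only a small task-specific head is trained on top. The class count and the dummy training step are placeholders.

```python
import torch
import torch.nn as nn
import torchvision

# Reuse a pretrained vision backbone as a frozen, commodity module.
backbone = torchvision.models.resnet18(pretrained=True)   # off-the-shelf features
backbone.fc = nn.Identity()                                # drop its original classifier
for p in backbone.parameters():
    p.requires_grad = False                                # treat the module as fixed

head = nn.Sequential(                                      # the only part we train
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 4),                                     # e.g. 4 object classes of interest
)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# one dummy training step just to show the wiring; real data would come from a loader
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```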
Yeah. So, I remember in 2015, you published this paper in Science with Michael Wellman,
who we also interviewed in the series, on economic reasoning and artificial intelligence.
Right. And that paper was really fascinating because
it kind of brought algorithmic economics and computational economics to the fore, and you made this very interesting observation about microeconomic theory. And the observation is that economic theory has really been studying the wrong agent almost, that a lot of these models from economic theory turn out not to really capture the way that humans reason in economic situations, but are actually perhaps more accurate descriptions
of how a machine would make those types of decisions.
That’s right. There’s this lovely irony, which is that the machine model the economists have
been studying, knowing that it’s not necessarily the right way to model humans, but using it
because they’ve been able to get beautiful results. Lovely theory has come out of the
rational actor model. At the same time, there’s all this evidence that people fail to follow the rational model in all kinds of ways. That theory, that theory that leads to how to design economic systems, how to design markets, how to design mechanisms to effect good outcomes, that theory that assumes rational actors, we argue in that paper, should be more applicable
to AI systems than human systems. So kind of in summary, the economists were
really studying machine behavior for six decades, they just didn’t know it.
Yes. They were studying the behavior of an idealized human, which could look like the
behavior of a machine. Nice. So now, I guess when the behavioral
economists are convincing the rest of the economic establishment to start paying more
attention to human psychology and behavioral biases and so on, we need to make sure they
don’t all switch ’cause we want some of them to stay and work on these idealized agents
because it is useful to understand the behavior of these.
I completely agree. They better not all switch. But also remember that the behavioral economists
will still be extremely important even in a world with rational AIs, because we have
the interface between people and AI, which is so important to get right and it’s so important
to have the AI be able to correctly understand what it should be doing. And if it cannot
understand that behavioral aspect of human behavior, it’s not going to correctly represent
our interests. So, we need both of those. But you’re right, if the whole economics field
shifted to studying the economics of people, then we have to pull them back because people
through augmentation with AI- May exhibit less of these sorts of biases.
… May exhibit fewer behavioral biases and will be better information-processing machines.
I mean, I find this topic really fascinating because as we were having this whole discussion,
at the same time there are people who are advocating for this kind of defending humans
and their irrationality, or so-called irrationality, from this assault that’s coming from behavioral economics and behavioral economists and psychologists. And this view is kind of represented by, maybe, evolutionary psychology, the related field of behavioral ecology, and so on. The idea is that humans, a lot
of things that humans exhibit that appear irrational are actually only irrational for
very artificial tasks that they are placed in the lab to perform. But in many cases,
the behavioral heuristics that people use that appear irrational in this context are
actually optimal in the environments in which we encounter ourselves in everyday life.
Or at least optimal in the environments to which we were evolved-
That’s right. … To be effective in, and hunter-gatherer environments may not look quite like the office environment that we’re in today. But yes,
this idea that what looks like a behavioral bias can well be rationalized within the context
of a computational system with limited computational resources. It’s a beautiful concept of what
does it mean to be boundedly optimal? This was a term that was introduced by Stuart Russell and his colleagues. There’s a notion of: I have a well-defined computational machine. It lives in a well-defined environment, receives percepts, has a notion of what its utility function is, and needs to make decisions. But if it tries to reason perfectly and it’s trying
to decide whether to take the bus or walk, if it reasons about that for too long the
bus is gonna go right past. So you have this need to understand the cost
of reasoning. This is also often talked about as meta-rationality, thinking about rationality
of being rational, but the bounded optimality construct is beautiful. It says: out of all possible agent programs, all possible agent designs that could run on this hardware, which could be a human brain, pick the one that optimizes performance in the real environment.
And from that perspective, the behavioral biases that we see, if we believe in the power
of evolution, are kind of appropriate and useful artifacts of what we’ve been optimized
to do to succeed as a race. So this reminds me a lot of Herbert Simon’s
notion of bounded rationality. Very, very related and it goes all the way
back to Herbert Simon, I.J. Good as well. Simon curiously won both the Turing Award and the Nobel Prize in economics. He’s kind of a perfect poster child for this discussion,
right? Everybody’s here.
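The cost-of-reasoning trade-off just described can be written down very simply. In this sketch the value_of_decision and cost_of_time functions are made-up stand-ins: decision quality improves with deliberation but with diminishing returns, each step of thinking has a real cost (the bus may leave), and the metareasoner picks however much deliberation maximizes the net value.

```python
import math

def value_of_decision(steps):
    """Expected quality of the decision: improves with deliberation, diminishing returns."""
    return 10.0 * (1.0 - math.exp(-steps / 5.0))

def cost_of_time(steps, cost_per_step=0.8):
    """Each unit of thinking has a real cost (the bus may leave while you deliberate)."""
    return cost_per_step * steps

def metareason(max_steps=30):
    """Choose the amount of deliberation that maximizes quality minus thinking cost."""
    return max(range(max_steps + 1),
               key=lambda k: value_of_decision(k) - cost_of_time(k))

k = metareason()
print(f"deliberate for {k} steps; net value = {value_of_decision(k) - cost_of_time(k):.2f}")
```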
So, I guess in this context one could say that if we are to look at the ecology of decision-making,
this notion from evolutionary psychology, they kind of refer to it as ecological rationality: you’re rational, even with those biases, for a specific ecology, the distribution of situations you encounter. If we take this idea to its logical conclusion,
then it may very well be that machines would learn those biases, at least some of them
if they happen to be useful because it would be more cost-effective.
If machines are bounded in a similar way to the way people are bounded then they would
… And if people were … So let’s put it this way, if a machine had exactly the computational
architecture that we have and if people have optimized for their environment and if machines
want to be optimal for the same environment and had the same problem, then they would
exhibit the same biases. Now, I think though what we already kind of understand is that
the computational methodologies, the hardware if you will as well, the software and the
hardware, will be different. And so, to the extent that we don’t see rational AI, let’s
say in the near future, the ways in which it chooses to be irrational, if the system’s
nicely optimized, will be in ways that don’t really matter to the function of the machine.
That’s what we should expect. But I guess I would, my hunch though is that
if we were to take a rational agent, rational AI from 20 years into the future, that has
very different hardware, far fewer computational limitations, and we subject them to some of
the behavioral experiments that people have used to demonstrate human irrationality today,
they may also very well appear irrational, because these experiments were perhaps not measuring the right thing; they were not measuring the performance that these algorithms are optimizing for, because they are simply using a more limited task than the one this behavior is optimized for.
So in a way I find this idea appealing because it’s redeeming of humanity in some sense that
maybe we’re not so bad after all. Surely the machines will be better than us in some, in
maybe all domains eventually. But it doesn’t mean that we really suck today. This is something
that seems to be a bit of a trope nowadays. Yeah, I think that’s right and I think that
in defense of the behavioral economists I think that they would say that the experiments
that they design are trying to capture things that are really just kind of boiled down vignettes
of important everyday economic decision-making. I suppose what that means is that while we
can understand why people make mistakes and we can rationalize those mistakes the way
we’ve been discussing, it will be interesting when we think about the program of machine
behavior. It will be interesting to think about what is, if you will, kind of the harness
of tests, the harness of example machine tasks that we should be kind of putting around AI
to understand what has it been optimized to do, how effective is it across a spectrum
of capabilities. That feels like an important aspect of what it will mean to understand
intelligent behavior. So, it’s great that you brought this up because
I’m interested in the way that the computer science establishment kind of handles this
question because when I speak to … I mean, I’m a computer scientist myself, but maybe
a little bit of a fallen one. But when I speak about machine behavior to a lot of computer
scientists, there’s this … And sometimes in the reviews for our article, there’s
this notion that, well, computer scientists have been doing machine behavior, thank you
very much. We have ways of testing the performance of classifiers and object detection systems
and vision systems and speech recognition systems and so on. We have metrics, clear
metrics, for identifying how good they are, what types of mistakes they make and so on.
And then we have all these test environments for reinforcement learning agents whether
it’s games like chess or Go or poker or 3-D environments like 3-D games, or even the physical
world for testing physical robots. So, we’re already doing machine behavior.
Why do we need the behavior scientists? So why do we need these people who don’t know
computer science to come and tell us what to do or to study the artifacts that we built?
So what’s kind of your take on this question? Well, the reason we need that is that our AIs that we’re building are increasingly going to be out in the wild, and it’s one thing to say that you have an input/output test, you have this, yes, very well-defined way to measure
errors, but one of the things we’ve been learning recently in deep learning for example, is
the fragility of the performance of learning systems when distributions change just a little
bit. It’s gonna be very hard to anticipate the ecology of AIs and people and there will
be emergent phenomena that we cannot anticipate, there’ll certainly be different inputs and
outputs than we had imagined in the lab. I think also there will be interactions, all
of this will be about interactions and we do not have good, at least to my knowledge,
we do not have good existing test beds to understand how my AI is doing in an interactive
context. That’s much more difficult than most of the test beds that people work with.
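A minimal sketch of the fragility point: the label below truly depends only on one feature, but in the “lab” data a second feature is spuriously correlated with it, so the classifier leans on both. It looks excellent on an in-distribution test set, then degrades when that correlation breaks “in the wild.” The data is synthetic and the numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000

# Lab data: the label depends only on feature 0, but feature 1 is almost a copy of it.
x0 = rng.normal(size=n)
X_train = np.column_stack([x0, x0 + 0.05 * rng.normal(size=n), rng.normal(size=(n, 5))])
clf = LogisticRegression(max_iter=1000).fit(X_train, (x0 > 0).astype(int))

# In-distribution test: the spurious correlation still holds, accuracy looks great.
x0 = rng.normal(size=n)
X_iid = np.column_stack([x0, x0 + 0.05 * rng.normal(size=n), rng.normal(size=(n, 5))])
print("lab accuracy:        ", accuracy_score((x0 > 0).astype(int), clf.predict(X_iid)))

# "In the wild": feature 1 is no longer correlated with feature 0, and the same model
# degrades even though the true concept (label = sign of feature 0) has not changed.
x0 = rng.normal(size=n)
X_wild = np.column_stack([x0, rng.normal(size=n), rng.normal(size=(n, 5))])
print("in-the-wild accuracy:", accuracy_score((x0 > 0).astype(int), clf.predict(X_wild)))
```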
Okay, so kind of the … A cynical computer scientist might say well, we’ll get to it.
Once the AIs are in the real world, we’ll get to that and we’ll study the behavior of
AI in the wild. Why would a psychologist or an economist or a political scientist come
and sort of work on this and run experiments on my AI agents?
I think part of it is that you want to avoid biases that come in through things that you
just don’t think about as a field. I love computer science, but computer science does
not have a rich tradition as an experimental field. There are people who do wonderful experiments,
but we are not, we don’t have the 50, 100, 150 years of history of very carefully measuring,
contextualizing, experimenting. Why not bring other fields in to help to understand what
is happening with new computational systems? And of course they should want to and they
do want to because we’re changing aspects of the way society works, of the way commerce
works, of the way we interact with each other and these are questions that they’ll want
to study. And I think the challenge that we understand
is that as computer scientists we have to be patient, we have to be modest about what
we understand, what we don’t understand, we have to be patient to explain what it is that
we’re doing and why we’re doing it, just as we need scientists in other fields to be modest
and patient and kind when they interact with our fields. But I think that one of the things
that the internet has done is it’s broken down barriers between fields. I often describe
my own research as arbitrage, it’s finding cool ideas in one field and cool ideas in
another field, finding a way to smash them together and asking what might happen. And
we can do that today. I remember when I was a graduate student,
I literally would spend days, weeks, months, xeroxing papers. Now we don’t do that anymore,
we just go online and we find the paper immediately. So the internet has enabled this breaking
down of barriers between fields. There’s no reason that science has to be done in the
way it traditionally has been done. I’m all about growing new communities, bringing the
right people together to make progress. Yeah. Recently some scholars, I think Brian
[inaudible 00:52:45] and other collaborators have shown that there’s an increasing prevalence
of teams, larger and larger teams, in the production of science.
Yeah, fascinating. Something that’s kind of a documented phenomenon
now. And the diversity, there’s recent evidence also that the diversity of the people within
those teams can also be predictive of success. And that’s why you have to be careful. That’s
a great example. Is it a causal effect or is it just a predictive effect? Is it that the great
scientists know to put together diverse teams or is it that the diverse teams lead to great
science? Absolutely and I must admit this as a computer
scientist, I did not get any training in these methods. I mean, I had to teach myself these
things by working with other fields and showing some humility as well.
Humility’s important. To learn from my collaborators. Actually there
were times, I’m not gonna say exactly who, but there were cases in which I was collaborating
with somebody on a paper and I was simultaneously studying their work, their online course.
I was their student and collaborator at the same time.
Wonderful. But I’m not going to disclose the identity
of this person. But that’s such an amazing thing that we can do today that perhaps we
couldn’t do- We can learn through [inaudible 00:54:11]
as well, which is just wonderful. That’s right. One of the things I wanted to
ask you about also is you’re part of this 100 year study of AI that is kind of curated
by Stanford and I think involves people from many different universities. Can you tell
us something about this? Yes, yes, yes. So the AI 100 study, which
will convene a study group every five years to look forward 15 years and look back five
years. Okay.
Okay? So, I was one of the people on the initial study group. Our task was to look at a North American city 15 years forward and try to think about how AI would impact
that environment. Now you can imagine we talked a lot about transportation, we talked a lot
about healthcare, we talked a lot about entertainment, we talked a lot about social relationships
in the workplace. Why 15 years forward? I think as scientists we find it very hard to
think forward. We have our research agenda right now, we understand the trajectory of
the research field, we’re AI researchers so we’re believers in the importance of AI and
the good that AI can do and we’re working on problems that people do not know how to
solve in our field. But we’re not necessarily good at predicting what’s going to happen
in the future. And perhaps there’s maybe a stigma even that
you’re becoming a futurist, you’re kind of bringing all these predictions out of thin
air and so on. Yeah. But I think what’s important about this
study is that it will bring a scholarly view to the progress that will be made.
Yup. That was always important to [inaudible 00:56:13].
He’s kind of the imagineer behind this whole effort. And it’s very important, there’s so
much hype and speculation about AI, so much misinformation. Let’s actually take a serious,
longitudinal look at what is happening, how is it impacting us. For good or for bad. It
won’t always be for good, but let’s make sure that we have this beautiful, historical record.
So I was very happy to be part of that and I joke that I will not be on a committee for
100 years. It’s a 100 year study, but not with me for 100 years.
So you’ll get out of that committee at some point.
Although some people are saying that if you are alive in 2050, you have a chance to then live forever. But I think those people are precisely the
people who are making those long-term predictions that people find problematic.
There’s a lot of speculation. That’s right. But I think, so for me, the
most fascinating part is to study the evolution of the report itself. So it’s the evolution
of the predictions and the opinions and the fears.
That’s why we’re looking back and looking forward. What did we say? Where are we?
That’s right. Maybe now, for instance, all the talk is of fake news and the future of
misinformation in the political sphere and so on, and the public’s fear. Maybe in 15
years or 20 years, maybe we’ll be worried about completely different things and this
will seem trivial or solved or whatever. And hopefully AI can help us not to be swayed
through misinformation. That problem maybe will go away.
So how do you imagine that … This, I think, is a good segue to a question about the solution
to these types of problems, problems of fake news and misinformation that really kind of
take on a life of their own. Kind of a combination of the medium, the processes that are feeding
the medium, which could be algorithms, they could be bots pushing fake news, maybe synthesizing
them even. But also social processes that are kind of amplifying and propagating this
information. As an expert in mechanism design and designing the rules as well as designing
the agents, the artificial agents within those rules, do you see this as a mechanism design problem? Do we need to change the way that news is served and shared between people, or is it the way it is filtered? Or is it the way people are using it? Is it a public education effort? What do you see as a potential solution
to this? Well first of all, it’s a really important
topic and many of my colleagues at Harvard are very worried about the impact that the
media is having on our democratic systems. [inaudible 00:59:19] for example, I’ve had
a number of conversations with him and he wants to understand what is influencing people
and how. And it’s not clear, by the way, that it’s the Twitters and the Facebooks. It could
be traditional media. Lots of people in America still get most of their media through the
traditional method, through broadcast. Economics is part of this, broadly.
Yeah. Because a lot of the media ecosystem is driven
by advertising revenue, so absolutely no doubt that economics is an important part of the
puzzle. Why are people making very dramatic, often inaccurate statements? Because it drives
viewership, drives advertising revenue. So there’s an economic component to this. Whether
there’s a mechanism design component, it’s a little bit less clear. The problem I thought
about there is whether we can use mechanism design to encourage crowdsourcing of people’s
sentiments and viewpoints and thoughts about whether what they’re seeing seems to be reliable
or not. How reputable does it seem? ‘Cause I think part of the problem is that
we don’t always get the right signals back from the consumers of the information. The Facebooks
and presumably other platforms are trying to address that. They’re trying to make it
easier all the time to provide feedback. There’s a chicken and egg thing there because if you
make it easy to provide feedback, then the feedback system itself can be attacked.
Yup. That’s right. So there is an incentive alignment question
there. How can we filter, aggregate, incentivize the correct feedback? So should we get new
signals into the algorithms that need to be curating this information? So that’s a thing that maybe people don’t necessarily realize, which is that we’re at a point where there’s
so much information, so much data, so many people participating in conversations that
moderation, filtering, aggregation, will need to be algorithmic. I mean, it’s great that
various companies at the moment have backed off, to some extent, using algorithms because
the algorithms were not trustworthy. They were making mistakes that were unacceptable.
So now there are people helping to do this. Yup.
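One toy way to picture algorithmic filtering and aggregation of crowd feedback is a simple truth-discovery loop: estimate each item’s reliability from reputation-weighted ratings, then update each rater’s reputation by how well they agree with the consensus. The sketch below is purely illustrative, ignores the incentive side of the problem entirely, and the ratings matrix is made up.

```python
import numpy as np

def aggregate_reliability(ratings, iters=10):
    """Toy truth discovery: users rate items as reliable (1) or not (0), np.nan = no rating.
    Iteratively estimate item scores and user reputations."""
    ratings = np.asarray(ratings, dtype=float)
    mask = ~np.isnan(ratings)
    weights = np.ones(ratings.shape[0])                    # start with equal reputations
    for _ in range(iters):
        # item score: reputation-weighted average of the ratings the item received
        num = np.nansum(ratings * weights[:, None] * mask, axis=0)
        den = (weights[:, None] * mask).sum(axis=0) + 1e-9
        scores = num / den
        # user reputation: closeness of the user's ratings to the current consensus
        err = np.nanmean(np.abs(ratings - scores[None, :]), axis=1)
        weights = 1.0 - err + 1e-9
    return scores, weights

ratings = [[1, 1, 0, np.nan],    # careful rater (didn't rate the last item)
           [1, 1, 0, 1],         # careful rater
           [0, 0, 1, 0]]         # contrarian or manipulative rater
scores, reputations = aggregate_reliability(ratings)
print("item reliability scores:", np.round(scores, 2))
print("rater reputations:      ", np.round(reputations, 2))
```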
But that doesn’t scale, so you need there to be this algorithmic system. But I do think
that we also need to be very careful to think that there’s kind of one viewpoint that says
this is right, this is not right. There are facts and there are not facts, but there is
a gray area. And I think that one of the roles for AI in lifting up individuals in society
is to help us to understand what’s important for our life so that we can filter information
the right way. But speaking of what’s important in our life,
don’t you think that sometimes illusions or delusions also play a very important role
in our lives? So we, in many cases, believing something that is false is a signal of your
membership in a group and there are all sorts of benefits that you get from your membership
in that group. And that’s an important thing. Like if I believe Harvard is better than MIT?
For instance, right? So, that’s a very important statement for you to make even if it happens
to be incorrect. Who am I to deprive you of your right to express that? Even if it was
factually incorrect, according to some objective metric. So my question then is it seems to
be a minefield because as soon as you algorithmically moderate something, even if you stick to the
facts and you say, “Look, I’m not going to moderate opinion unless it’s hate speech or
something like that. I’m only going to signal what is factually incorrect,” that is also
an intervention in a system that relies in part on things that are incorrect to function.
And there are all sorts of things that we believe as a society that are … We’ve never
been able to prove that are probably false, but they’re fundamental to our existence,
they’re fundamental to our cohesion, to norms that we have and so on.
So it seems to me that it’s a very, very difficult problem to solve.
This gets us right back to agency again. It’s so important that as technology continues
to advance, that we still have choices. I completely agree with you. I mean AI moderation
should not be about everybody seeing the same thing. I mean, good grief, that’s not a society
that we want to live in. But, I do think that there’s a middle ground here about exposing
the right trustworthy signals so that now your AI that’s helping to decide how to make
decisions on your behalf and helping you to decide where you should be paying attention
is making better decisions on your behalf. Yup.
I believe that a lot of what’s going on at the moment is because we are so time-limited,
we are so information-overwhelmed, and that that’s leading to a lot of the ways in which
our society is failing. So I guess it’s kind of relatively un-controversial
to say that if somebody wants to know if something is factually correct, it should be easy for
me to find out. Like a tool that will help me do such verification is a very useful tool.
Would be a useful tool. Now I want to retain my right to believe it
anyway, even if it’s false, I would like to check.
Yes. Or not to use the tool. But I think a tool that would help me verify, if I care about the verification, would be useful. For example, imagine how useful it would be
to be able to automatically be kind of a member of a … or an attendee that goes to a scientific
conference. Well, we all know that a lot of the conversations, the important conversations
that happen at conferences, are not the lecture and the questioning and the back and forth,
it’s just the conversations that happen around the main event. And as you have these conversations,
there’s a consensus, there’s a common view that rises up about what people collectively
hold to be true about their fields, what results are people questioning, things like this.
So there’s these very distributed, socially influenced, messy conversations happening.
What if we had AI that would help to tease out from those messy conversations: okay,
this looks like a consensus view held by scientists on what is happening with climate change.
And these are the things that the scientists at the conference are disagreeing about. That
would be super helpful. Yup.
We don’t have that right now. True.
Let’s hope we have that soon. True. Now we have mostly kind of partisan
talking heads that are pushing one view or another, selectively choosing the evidence
that supports their view. And then people who are kind of lapping everything up purely
to signal, perhaps, political ideology and so on, which is not very helpful for policy-making.
Okay. Well, that was a pleasure and that was really a fascinating chat. So, thank you.
It was great. Thank you.
