July 26, 2019

A Summer Update (And what and why I study what I do)

So, a while ago I promised that I would upload a blog post every single week. Let me tell you a secret: I lied. Well, I didn't exactly lie, but some things got in the way - things like a PhD program, a new internship, and general laziness. But now that I'm relatively free of those distractions, I want to take the chance to get back to blogging.

So, what to talk about this week? I think it's time to set the stage for a renaissance, if you will. I'm going to talk a bit about my current research, and about what I think are some of the interesting problems in the field I'm working in right now. For those of you who don't know, my name is David Chan, and I am (currently) a PhD student in Computer Science at the University of California, Berkeley. My advisor there is John Canny and, while it seems like my one-line statement changes every year, here is the current version:

I study how we can interact with autonomous agents in a human-driven and understandable way.

What does this mean? I want to study how we can design and build interfaces for humans to interact with autonomous agents (whether they are self-driving cars, self-flying satellites, or facial recognition systems) in a way that is human-driven. At the end of the day, the reason that we are building artificial intelligence is to drive human productivity. We want to take the boring and mundane things out of life, so humans can focus on what matters to them. Artificial intelligence has the ability to connect us with our humanity in so many ways - from drastically increasing the productive working hours of our day, so we can spend more time with the people who matter most, to solving global humanitarian issues such as food shortages and human trafficking.

Unfortunately, in the same way that AI has the power to connect us more closely to our humanity, it also has the power to distance us from those same idyllic goals. We have already seen flawed artificial agents incite racist behavior, discriminate against ethnic minorities, and even kill people. This is why I want to study the ways that humans can interact with these agents, and how these artificial agents can be leveraged to solve human problems.

This drive to solve human problems became apparent during my undergraduate research work in 2015 with Dr. Mohammed Mahoor at the University of Denver. In my work there, we were developing facial expression recognition algorithms - that is, algorithms that recognize whether somebody is happy or sad, angry or disgusted, from just a picture of their face. Humans are remarkably good at this problem: we can pick up on a multitude of cues that others provide, and ascribe the relevant emotion to that person. What was interesting about this research, however, was not just that it was on the cutting edge of computer vision, but that it also had the ability to make a direct impact on how we interact with robotic agents.

A robotic agent which can understand your emotion significantly changes the way you interact with that agent. If the agent understands that you are upset, it can use calming tones; if it recognizes that you are sad, it can do its best to cheer you up. This "empathy" allows for rich, meaningful interactions with artificial agents. So meaningful, in fact, that some autistic children are more willing to interact with robots during therapy than with other humans.

This research started my investigation into how we can interact with autonomous agents - and it has continued ever since. Last year, I interned with NASA JPL to study the ways in which we can entrust the controls of satellite mapping systems to learning agents. This internship, to me, was a good way to explore some of the real-world problems that arise when brokering a conversation between humans and AI. We looked at how we could cede control of mission-critical technology to artificial agents, and how we can trust them in the loop to do the right thing. It turns out that this is a very hard problem - one which probably deserves a blog post of its own. I'll add that to the discussion ideas.

During the last three years at Berkeley, I have looked a bit more closely at how we interact with AI systems, and I have slowly begun to understand a major problem with current learning systems: they're not actually learning. Most of the recent breakthroughs in machine learning are due to our increased ability to train deep neural networks - complex mathematical functions which learn to mimic a mapping from inputs to outputs using a set of training examples. Until I write something myself, I recommend this intro for the non-technical reader. These deep learning functions learn to mimic behavior, but they don't understand what is happening, or the nuances of a problem. You cannot communicate with these agents. They are only trained as powerful mimics.
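To make "mimicry" concrete, here is a minimal sketch - a toy example of my own in NumPy, not code from any project mentioned above - of what training a deep-learning-style function looks like. A tiny network is nudged, step by step, until its outputs match examples of y = sin(x). It never acquires any notion of what sine is; it only fits the input-output pairs it was shown.

```python
# A toy illustration of "learning as mimicry": fit a tiny neural network
# to examples of y = sin(x) using plain gradient descent. (My own sketch,
# not the actual code from any of the research described in this post.)
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs x and the behavior we want mimicked, y = sin(x).
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

# One hidden layer: y_hat = tanh(x @ W1 + b1) @ W2 + b2.
W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    # Forward pass: the network's current imitation of the mapping.
    h = np.tanh(x @ W1 + b1)          # hidden activations, (256, 32)
    y_hat = h @ W2 + b2               # predictions, (256, 1)
    err = y_hat - y                   # gradient seed for mean-squared error

    # Backward pass: nudge every weight to reduce the mimicry error.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean-squared error:", float(np.mean(err ** 2)))
```

Everything the network "knows" lives in those weight matrices. Feed it inputs far outside the range it was trained on, and the mimicry falls apart - there is no understanding underneath to fall back on.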

This is not to diminish the power of these agents at all. Training from large amounts of data to make amazing feats of computer vision and technology possible is quite an undertaking. But there's something more that we want. We want agents which can understand, reason, and communicate. Agents like these can solve problems that the mimicking agents of today cannot.

But how do we get there? I'm interning at Dropbox this summer, studying the first part of that question - the "understanding" - but the second parts, the reasoning and the communicating, are questions for another day, and for the remainder of my PhD to think about. And providing powerful agents which understand and reason is only half of the story. The other half is providing ways that humans can interact with these agents - if an agent cannot effectively communicate what it has understood and reasoned about, then there is no reason for the agent to exist at all. The utopian (or perhaps dystopian) vision, where the work is done by AI and robotic agents, and humans are free to do whatever they wish with their time, is a long way from coming to pass. One day it will be here, though. That, or robots will kill us all, and dance among the piles of human bones.


I think that one of the things that makes writing consistent blog posts hard is a feeling along the lines of: "You have nothing to contribute to the online conversation." This feeling of despair is the same one that haunts us when we say "Oh, I should post something on social media" or "Oh, I should send an email update to that one teacher I had in high school about what I'm doing with my life." You think to yourself, "I have nothing to say... so I'll just say nothing." Well, let me tell you something. You're absolutely right. Ha. Take that. Then let me tell you another thing: you're absolutely wrong. While it may be the case that a thousand other people can say what you're going to say, the words that you put onto the page are uniquely your own. No matter how many people retell Macbeth (and there will be many more), every single retelling will be a little bit different, and a little bit unique. Or at least, that's what I'm going to tell myself. I've set up a weekly reminder to keep going, so perhaps these blog posts will slowly become more frequent. If not, so be it.