Jeff Deiss 0:00
Greetings, this is Jeff, director of the Badger Herald podcast, and today we have a very exciting episode. We're talking with Professor Kangwook Lee of the Electrical and Computer Engineering Department at the University of Wisconsin–Madison about his research on deep learning, recent developments in machine learning, and a little bit about his influence on a popular test prep service called Riiid.
So, I originally saw your name in a New York Times article about Riiid, a test prep service started by YJ Jang that uses deep learning to guide students toward more effective test prep and overall academic success. But we can get into that a little later. First, would you introduce yourself and give a little background on your life?
Lee 1:18
Alright, hi, I'm Kangwook Lee. Again, I'm an assistant professor in the ECE department here. I came here in fall 2019, so it's been about three and a half years since I joined. I've been enjoying it a lot, except for COVID, but everything is great. My research is mostly in information theory, machine learning and deep learning. Before this, I did my master's and PhD at Berkeley, and before that I did my undergraduate studies in Korea, where I grew up. So it's been a while since I came to the United States. I did go back to Korea for three years for my military service after my PhD. So yeah, happy to meet you guys and talk about my research.
Deiss 2:09
Of course, and that's the first question I have. With any topic related to machine learning or information theory, even for someone who studied this at a basic level in school, it can be hard to wrap your head around some of these concepts. In layman's terms, can you describe some of your recent research to give our listeners a better sense of what you do here at UW-Madison?
Lee 2:32
Since I joined Madison, I have worked on three different research topics. The first one was: how much data do we need for machine learning? There, I particularly studied the problem of recommendation, where you have data from clients or customers who provide ratings on different types of items, and from that kind of partially observed data you want to make recommendations for their future service. We should figure out how much data we need for that. So recommendation systems and algorithms were the first topic I worked on. The second topic was called trustworthy machine learning. By trustworthy machine learning, I mean that machine learning algorithms, in most cases, are not fair, they are not robust, and they are not private; they can leak the private data that was used for training. There are many issues like this, and people started looking at how to solve them and build more robust, more fair and more private algorithms. Those are the research topics I really liked working on in the last few years, and I still work on them. Recently, I have started working on another research topic called large models. Large models are, I guess you must have heard about them, models like GPT and diffusion models that are becoming more and more popular, but we are lacking theory about how they work. So that's what I'm working on now.
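[Editor's note: to make the recommendation setting concrete, here is a minimal sketch of low-rank matrix completion, one standard way to formalize "predict missing ratings from partially observed data." The toy data, the rank, and the alternating-least-squares routine below are illustrative assumptions for this note, not code from Lee's research.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 "ratings" matrix: 20 users x 15 items (toy data).
U_true = rng.normal(size=(20, 2))
V_true = rng.normal(size=(15, 2))
R = U_true @ V_true.T

# Observe only ~60% of the entries, as in a real recommender.
mask = rng.random(R.shape) < 0.6

def als_complete(R, mask, rank=2, n_iters=50, reg=1e-3):
    """Alternating least squares: fit user/item factors to observed entries only."""
    n_users, n_items = R.shape
    U = rng.normal(size=(n_users, rank))
    V = rng.normal(size=(n_items, rank))
    for _ in range(n_iters):
        for i in range(n_users):        # update each user's factor
            obs = mask[i]
            if obs.any():
                A = V[obs].T @ V[obs] + reg * np.eye(rank)
                U[i] = np.linalg.solve(A, V[obs].T @ R[i, obs])
        for j in range(n_items):        # update each item's factor
            obs = mask[:, j]
            if obs.any():
                A = U[obs].T @ U[obs] + reg * np.eye(rank)
                V[j] = np.linalg.solve(A, U[obs].T @ R[obs, j])
    return U @ V.T

R_hat = als_complete(R, mask)
# Evaluate only on the entries we never observed: the "recommendation" test.
err = np.linalg.norm((R_hat - R)[~mask]) / np.linalg.norm(R[~mask])
print(f"relative error on unseen entries: {err:.3f}")
```

The "how much data" question Lee mentions is exactly how large the observed fraction must be before this kind of recovery succeeds.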
Deiss 4:18
Yeah, so I just wanted to ask: I often hear, not necessarily in academic papers but in the media, about how some of these large models, especially complicated neural networks or deep learning algorithms, are described as a black box, where what the algorithm is actually doing with the data is unclear from the outside. Whereas with a simple regression model, it's pretty easy to work out the math of what the algorithm is doing with the data. With a large model, is that the case? Can you describe a little bit about that black box problem that researchers have to deal with?
Lee 4:57
The black box description actually applies to a more general class, let's say deep learning as a whole; you can say those models are kind of a black box. I think that's half correct and half incorrect. Half incorrect in the sense that when we design those models, we have a particular goal: we want the model to behave in a certain way. For instance, even if we call GPT mostly or largely black-box-ish, we still design the systems and algorithms such that the model is good at predicting the next word. That's not something that just came out of the box; we designed it so that it predicts the next word well, and that's what we are seeing in ChatGPT and other GPT models. So in terms of the operation or the final objective, they are doing what the people who designed them wanted them to do. It's less of a black box in that sense. However, how it actually works that well, I think that's the mysterious part. We couldn't predict how well it would work, but somehow it worked much better than people expected. Explaining why that's the case is an interesting research question, and that's what makes it a little black-box-ish. What's also very interesting to me, when it comes to GPT and really large language models, is that there are more mysterious things happening. Going back to the first aspect: in fact, there are some interesting behaviors that people didn't intend to design, things like in-context learning, or few-shot learning. Basically, when you use GPT, you provide a few examples to the model, and the model tries to learn some patterns from the examples that are provided, which is a bit beyond what people used to expect from the model. The model has some new properties or behaviors that we didn't design.
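[Editor's note: "in-context learning" means the model picks up a pattern purely from examples placed in the prompt, with no retraining or gradient updates. A minimal sketch of what such a few-shot prompt looks like; the translation pairs are illustrative:]

```python
# Few-shot prompt: input/output examples in the prompt itself, then a query.
# A large language model is expected to infer the pattern (English -> French)
# from the examples alone and complete the final line.
examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("cheese", "fromage"),
]
query = "plush giraffe"

prompt_lines = [f"English: {en}\nFrench: {fr}" for en, fr in examples]
prompt_lines.append(f"English: {query}\nFrench:")
prompt = "\n\n".join(prompt_lines)
print(prompt)
```

Nothing in the model's training objective says "translate"; the behavior emerges from next-word prediction, which is the mystery Lee describes.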
Deiss 7:00
Yes, and I want to get back to ChatGPT for another perspective in a little bit, but one thing I saw you were recently researching, which came up in interviews, is the straggler problem in machine learning. As far as I know, it's where a certain part of the system, I don't know if node is the correct term, is so deficient that it brings down the performance of the whole algorithm. Can you describe a little bit about what the straggler problem is and the research you're doing on it?
Lee 7:29
Yeah. The straggler problem is a term for the situation where you have a large cluster, and the entire cluster is working jointly on a particular task. If one of the nodes or machines within the cluster starts performing badly, producing wrong output or running slower than the others, then the entire system either gets wrong answers or becomes very slow. So the straggler problem basically means you have a big system consisting of many workers, and when a few workers become very slow or erroneous, the entire system becomes bad. That's the phenomenon. This problem was first observed in large data centers, like Google's or Facebook's, about a decade ago; they reported that a few stragglers were making their entire data centers really slow and really bad in terms of performance. So we started working on how to fix these problems using more principled approaches like information theory and coding theory, which are very relevant to large-scale machine learning systems, because large-scale machine learning requires cluster training, distributed training, that kind of thing. That's how it's connected to distributed machine learning.
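[Editor's note: one coding-theoretic idea in this line of work is to add a redundant "parity" task so the system can finish without waiting for the slowest worker. A minimal sketch with a toy matrix-vector multiply split across three workers, where any two results suffice to decode the answer; the setup is illustrative, not Lee's actual system:]

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 4))
x = rng.normal(size=4)

# Split A into two halves and add a parity block A1 + A2.
# Any 2 of the 3 worker results are enough to recover A @ x,
# so one straggler can simply be ignored instead of waited on.
A1, A2 = A[:3], A[3:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker "w2" straggles; we only hear back from w1 and w3.
results = {w: tasks[w] @ x for w in ("w1", "w3")}

# Decode: the missing half is the parity result minus the half we have.
top = results["w1"]
bottom = results["w3"] - results["w1"]
y = np.concatenate([top, bottom])

assert np.allclose(y, A @ x)
print("recovered A @ x without waiting for the straggler")
```

Without the parity task, the job would stall until "w2" returned; with it, the slowest worker's latency no longer dictates the system's.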
Deiss 8:57
Very interesting stuff. I want to pivot away from your research for a little bit and talk about how I originally heard your name. Like I said in the beginning, I saw a New York Times article about a test prep service, and YJ Jang, who started Riiid, said he was inspired by you to use deep learning in the software he was originally creating. What is your relationship with him, and how did you influence him to utilize deep learning?
Lee 9:25
Sure. He's a friend of mine. He texted me the link to the article, and I was really interested to see it. I met him about 10 years ago, when I was a student at Berkeley. He was also a student at Berkeley, but we didn't know each other; we both participated in a startup competition over a weekend. We drove down to San Jose, where the startup competition was happening, and since I didn't know him, I was finding some other folks there. We created a demo and gave a pitch. We won second place; he won first place.
Deiss 10:09
Wow.
Lee 10:10
So I was talking to him: "Hey, where are you from?" And he said he was from Berkeley, and I'm from Berkeley, so I got to know him from there. I knew he was a really good businessman back then. We came back to Berkeley, started talking more and more, and had the idea of doing a startup. We spent about six months developing business ideas and building some demos. It was also related to education, so it was slightly different from what they are working on now. But eventually we found the business was really difficult to run, so we gave up. After that, he started his own business, and he started asking me, "Hey, I have this interesting problem, and I think machine learning could play a big role here." He started sharing his business idea, and that was the time when I was working on machine learning, in particular on recommendation systems. I was able to find the connection between recommendation systems and the problem they were working on: students spend so much time on test prep, and they waste so much of it working on things they already know. Efficient test prep is no different from not wasting time watching something on Netflix that isn't for you. That's the point where I started sharing this idea with him. And in fact, deep learning was increasingly being used for recommendation systems. So I shared all these ideas with him, and he made a great business out of it.
Deiss 11:54
Yes, definitely. Obviously, test prep services like this are one way machine learning and deep learning models can actually help educators. But in the media, it's all about ChatGPT; I hear some new news about ChatGPT like every day. There was actually a panel here at UW-Madison recently about students potentially using it to cheat in ways people didn't think you could cheat before, like having it write your essay for you. As an educator, someone connected to the education system here, do you think these chatbots pose a threat to traditional methods of teaching?
Lee 12:32
In my opinion, I would say no. I don't see much difference from the moment we started having access to, say, calculators, or MATLAB, or Python. There are still things we exercise in elementary school; you're supposed to do 12 plus 13 or 10 minus 5, and you still do it. Of course, students can go home, use a calculator and cheat, but we don't care, because unless you're going to rely on those machines and devices to do your entire work, you have to do it on your own sometimes, and you have to understand the principles behind those tasks. Essay writing, for instance, is the biggest issue right now with ChatGPT. You can always use ChatGPT without knowing anything about essay writing, and I think it's going to get way better this year. However, if you decide not to learn how to write essays, then you end up not knowing something that's really important in your life. So eventually people will choose to learn it anyway, and not cheat. How to fairly grade them, that's the problem. I think grading is the issue, not education as a whole.
Deiss 14:01
Yes, that's kind of the thing. In my opinion, I thought a similar thing: if a student is really good, and they want to improve and get that good grade on the final exam or whatever it is, they're going to learn what they need to learn. But when it comes to grading individual assignments, I feel like if something can write your essay for you, it throws the whole book out the window. How do I grade things if I can't tell whether someone wrote this by themselves over three days or put it into a chatbot? Regardless, with ChatGPT kind of taking over the media and public discourse around machine learning, I often joke with my friends: if we think ChatGPT is cool, I don't know what Google has been cooking up in the back for 10 years. Who knows what's going to be here over the next decade? So in your opinion, are there other interesting developments in machine learning right now that people can expect to see, and if so, what do you think they are?
Lee 14:56
Yeah, but before we move on: I think Google also has a lot of interesting techniques and models, but they are just slower in releasing and deploying them. So we'll see; I think the recent announcement of Bard is super interesting, and we'll get to see more and more coming like that. Anyway, talking about other interesting developments: other than large language models, what also interests me is diffusion models. I guess most have heard about them, like DALL·E 2, where you provide a text prompt and the model draws something for you. Text-to-image models were more or less a fun activity, because you couldn't do that much with them. But the fundamental technique has been applied to many different domains, and now it's being used not just for images but for audio, music, and other things like 3D assets, and it's going wider and wider. We will probably see a moment where these things become really powerful and are used basically everywhere. I don't think we'll need to draw any diagrams by hand; when you create a PowerPoint, you'll just need to type whatever you think it should look like, and it should be able to draw everything for you. And any design problem, think about web design, product design, things are going to be very different. Yeah.
Deiss 16:35
Yes. I guess just to wrap it up: people like to kind of fear-monger about a lot of this stuff, saying it's going to destroy the job market and everyone's going to be automated away. That's just one thing I hear, but people do have concerns about the prevalence of machine learning that's kind of emerging in our lives. Do you have any concerns about what's going on right now in the world of machine learning, or do you think people might be a little too pessimistic?
Lee 17:03
There are certainly, I will say, certain jobs that are going to be less needed than now. That's clearly a concern. However, most jobs out there, I think, can benefit from these models and tools: their productivity will become better, and people can probably make more money if they know how to use these tools well. However, take, for instance, concept artists or designers, talking about these diffusion models. At some point these kinds of automated models could become really good, doing the job almost as well as what those artists are doing right now. And that's the point where it's really tricky, because we're going to see two different markets. Right now, if you go to a pottery market, there is handmade pottery and factory-made pottery, and no one can distinguish them, to be honest. Actually, handmade pottery is even more unique: it has slightly different coloring, and it has little defects that make handmade pieces look even more unique and beautiful than the factory-made ones. But back in the day, we used to appreciate factory-made pottery, with no defects and completely symmetric; that's what humans couldn't make. I think we are going that way, because now models are going to be better at making perfect, flawless artwork and designs, and probably what we will make as human designers and artists will have a little bit of, I wouldn't call it flaws or defects, but it won't look like what machines can make. So maybe those two markets will emerge, and maybe both will survive forever, like the pottery market. I don't know, I can't predict what will happen, but I'm still optimistic.
Deiss 19:05
Awesome. I think that's a good note to end on. Thank you for coming to talk with me today on the Badger Herald podcast, and I'm excited to see what you do next in your research.
Lee 19:14
All right. Thank you. It was great talking to you.
Deiss 19:15
Thank you so much.