
In our ninth installment of the series, Rob Cesternino welcomes the Director of Optimal Robotics Lab, robotics professor, Survivor castaway, comptroller of slam-town and great friend of the podcast Dr. Christian Hubicki to respond to questions submitted by RHAP listeners.[00:00:00] Sure, we can multitask, but when it counts, we're glad to be able to focus on one thing. The new Samsung Galaxy S25 Ultra makes that possible, as your personal AI companion. Activate Google Gemini and ask the AI, for example, for suitable restaurant options and share them with your contacts. It sounds like this: Hey, find me an Indian restaurant nearby and send it to Luca. Easy. What else can the Galaxy S25 Ultra do? Learn more at samsung.de.
[00:00:30] Hey, everybody. What's going on? Rob Cesternino back here for one of the very fun podcasts we get to do here. I believe that we are in year five now as we enter 2025 of getting to ask all of our questions to Dr. Hubicki. Christian Hubicki is here. Christian, how are you? Doing great. It's great to be back. And it's been quite a six months since we last talked, so it should be a good time.
[00:00:58] Okay. Wow. All right. Yes. If you were, like, in a coma and then just binged the Ask Dr. Hubicki podcast episodes starting in 2020, you would be in for a wild ride. We cover a lot of stuff, Rob, and there's a variety of topics, scientific, reality show-related.
[00:01:19] Somebody should binge all of them and go back to like our first one, which I think we did like back in like November 2020. And it's like how quaint. How quaint indeed. The peak of the pandemic. And if anyone does this, if anyone does this, reach out to me. I would love to hear. I want to start cataloging all the questions that I've answered. I keep meaning to re-listen to all of myself to make sure I don't say the same story twice. But now we've done enough of them that I'm starting to, the brain is starting to fizzle out.
[00:01:49] So reach out to me over the- November 26, 2020 was the original Ask Dr. Hubicki. Yes. Yep. I think we took a little time off in like 2021 because I didn't want to bother you, because you were watching all 40 seasons of Survivor in a row. I'm like, I'm going to leave Rob to his own devices for 12 months. Yeah. The poor man.
[00:02:16] Okay. But here we are, back, and we have great questions from the listeners, which Christian has not, you know, I was talking about this with my wife. She's like, oh, he just, like, off the top of his head answers this stuff? Like, no. No, no, no, no, no. To be clear, I want to set the expectations right.
We see the questions in advance, and sometimes you throw me a curve ball, Rob, and that's totally cool. I'm cool with that. But you know, these questions are complicated. And while I think I could come up with an answer off the cuff, you want to get the details right. You don't want to mislead people. Rob, people have cited this podcast in their term papers. We can't give them misinformation off the cuff.
[00:02:56] Yeah. And so Christian takes great pride in, uh, he's an educator. He wants to get things right. I think that he is unlike the other 99.99% of the reality TV personalities that he cares about getting it right. Well, I won't speak for the other, the other, uh, my cohort, but I, I, I definitely care. I definitely care. Uh, so, so that, so if you have corrections, you know, be polite and reach out to me on the, on the socials and I would love to hear them. Okay. Christian, what's new with you?
[00:03:25] Oh, it's, uh, been a blast of a year, Rob. I think we last talked for an Ask Dr. Hubicki in June of last year. And, um, yeah, in that time I went to DragonCon. You know what DragonCon is? This is like a Comic-Con type conference. Yeah. Okay. Yeah. And I went there and I gave a bunch of robotics talks. Oh. And that was really fun. Like it was basically sci-fi themed robotics talks. And, uh, I've been doing this.
[00:03:55] Now for two years and I want to keep doing it cause it's fun. So like trying to get more talks out there where I'm talking about, you know, science meets, you know, pop culture. And that, that, that, that's a good time. And otherwise I've been, uh, been doing, uh, doing the professor thing, running research projects and keeping me busy. Okay. You have a, uh, very elaborate equation behind you in the shot. Uh, is this anything that's worth talking about? Oh, well, this is your work.
[00:04:21] This, this is, this is technically yes for my work. This is actually for my class that I'm running is called applied optimal control. If I were to, you know, move my camera up, it says applied optimal control up there. So this is a smattering of equations from the class and sort of in the background. Cause I, for those who don't know, it is now currently snowing in Tallahassee. And so we have snow days and, and I can't get behind on the term. So I'm virtually recording my lectures. So I do it all from this setup and this is my backdrop and I can always point at equations in the middle of it. So yeah.
[00:04:50] Okay. All right. Let's jump into some of these questions and let me start with a topical reality TV one for you from Bob Denver. Okay. Yeah. Famous name. Okay. Can you explain the best strategy for choosing the cases on Dondi? When it comes down to two cases, is it similar to the Monty Hall problem? Well, that's a great question. Bob Denver, the Bob Denver, right? We got the one confirmed, confirmed. Yeah. So,
So Deal or No Deal Island is otherwise known as Dondi, for those who are not regular listeners. It's a new show; the second season is going on right now. Yeah. Are you a regular viewer of Dondi? I'm a recent viewer of Dondi. I'm a recent viewer. I'm watching this season. I caught bits of the last season and I'm sort of catching up on the format, but lots of math is talked about, and very little of the math doesn't make me angry.
[00:05:51] So, it's a, it's, it's, it's, it's quite a, it's quite a rollercoaster to the point where like, I feel like I understand what's going on. Then all of a sudden, I'm not sure. Maybe I'm still getting used to the format, but I'm enjoying it. I enjoy the ride. You got some good cast. I hear there's an actual deity on the season. Am I, do I understand that right? An actual God of some kind? The golden God. Yes. The golden God, the golden God of Australian survivor. They know it's, it's, but it's an interesting show and it brings up a lot of mathematical questions.
[00:06:16] And there are always arguments over what the statistically best decision is to make in different situations. But the part of this question that's of interest is we talk about choosing the cases. I'm assuming this is during, like, what we call the deal, when you're dealing with the banker, right? So, for those who are not familiar with Deal or No Deal or Deal or No Deal Island, it's a newer instantiation on an island.
[00:06:43] The key game is that you are given a case and, and this case is closed and it could have any, some number of dollar amount in it. And there are also like a bunch of other cases, depends upon the format, how many other cases there are. And they will range in dollar amount inside the case from like a penny to a million dollars or some version of that. Right.
[00:07:03] Right. And so as a result, you have a case in your hands, and you then select cases from among the others to be opened. They are now out of the running, and it's revealed what's in them. Right. And so you're hoping and praying the million dollars is not in the case that you are revealing. And then as the cases dwindle, the banker makes a deal with you and says, okay, you give up the case you're holding and I will give you this much money.
[00:07:32] Yeah. And that amount of, I'm going to go a little all over the place, Rob. I was trying to explain this concept to one of my fellow professors, who's an information theorist. I was talking about Deal or No Deal, and I was talking about Dondi with him today, and I was trying to explain to him the rules. So this is how you give me flashbacks. And he's asking, is the deal, like, a weighted average of the cases? I'm like, it's kind of, sort of, kind of not, but anyway. So that's the premise.
[00:08:00] So that's the premise of how you play Deal or No Deal. It sounds very similar to a classic game, which we call the Monty Hall problem, from the show Let's Make a Deal, a similar-sounding show where there are three doors, and it has this classic mathematical conundrum associated with it called the Monty Hall problem, because the host is named Monty Hall. We've talked about this many times on the podcast, Rob, but just a little bit of background.
[00:08:26] And so it's similar in that one of these doors is a winner and two are losers. You choose a door. And then Monty Hall goes to one of the other doors and says, I'm going to reveal this door. And it's not a winner, right? Leaving you with only two doors left. And the classic conundrum is, do you stick with the door you've got or do you switch?
[00:08:49] And this is a well-known, famous math problem where people have shown that even though intuitively it shouldn't matter, intuitively you might say it's 50-50 so it doesn't matter whether you stay or switch, it's actually better to switch. You in fact double your odds by switching. That's the classic Monty Hall conundrum. So you might ask yourself, is Deal or No Deal the same thing?
[00:09:10] If you are sitting on this case that has some money in it, and then you are eliminating all these other cases that could have the big money in them, eliminating, eliminating, eliminating. Does it make sense to switch? Does it also improve your odds to do so? It looks like that, but it's not. It is fundamentally different. Dondi is fundamentally different than Monty Hall, or Let's Make a Deal, or LMAD. Dondi is not equal to LMAD.
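The double-your-odds claim for the classic Monty Hall game is easy to check numerically. Here's a minimal Monte Carlo sketch; the door labels and function names are mine, purely for illustration:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the classic Monty Hall game; True if the player wins."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # The host always opens a losing door that isn't the player's pick.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

def win_rate(switch: bool, trials: int = 100_000) -> float:
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials

print(f"stay:   {win_rate(False):.3f}")  # hovers near 1/3
print(f"switch: {win_rate(True):.3f}")   # hovers near 2/3
```

Because the host's reveal is constrained to losing doors, switching wins exactly when the initial pick was wrong, which happens two times out of three.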
[00:09:41] Okay. So you're talking about, okay, I'm on Deal or No Deal. I've gotten to the end. I have suitcase number 12. Okay. Yes. And so there is one suitcase that's left. I have my suitcase. The suitcase that's up on the board, I don't know what that is.
[00:10:03] And there's one that I have selected, and I will know the two numbers, what the two potential options could be. One will probably be high. One will be low. And the banker's offer will be something in the middle. And so if the suitcase I keep is higher than the banker's offer, I stay, and if it's lower, I made a bad deal. So at this point, it's just straight up 50-50.
[00:10:32] There's no difference to switch. Correct. That's how it works out. And I understand. And I am preparing for this podcast. Nicole, don't, I don't, just say I prepared for this. I went, I fully, I went through the process. My wife is listening. That's fine. That's a hate. I'll text her later. Anyway, I don't have her number. Sorry.
[00:10:51] The, the, when it comes to this, so, so the Monty Hall problem, I talked about it many times on Twitter, here, in research meetings. I literally, this last week, I had someone give a presentation on the Monty Hall problem in an engineering research meeting. So I see, I think about it all the time. And I was thinking about it again today to prepare for this. And again, and I got in my own head again. It's like, is it really the case? What if I consider this? What if I consider that?
[00:11:18] And I had to go through this whole process again of convincing myself, yes, in fact, in Monty Hall it makes sense to switch, and in Deal or No Deal, it does not matter. But just as a preview as to how you can try to demonstrate this for yourself, as to why there's a difference, I have a handy chart, which will be nigh indecipherable, that I'll talk through a little bit. Thank you so much, Rob.
[00:11:42] So the, so for those who can't see this because you listen to a podcast version, what I have sort of, what you can imagine is sort of like a tree where you have like branches of reality. And so I'm going to do the situation where you have three cases. Okay. Let's say you have three cases and you're playing deal or no deal. And you've chosen the first case. We'll call it case A. You want to figure out which of these cases is a winner. Okay.
[00:12:08] And so there's a one-third chance that the winner is in case A, B, or C. All right. And what you need to do is then work through all of the different possibilities of, do you then open up case B or case C, for all three of those. So basically those three branches split off again into six branches. And then once you've opened those up, I'm trying to avoid my ring light here, sorry. And once you open those up, there's again two options you could do.
[00:12:38] You could trade your case or you could keep your case. So then you go from six options, they branch off again into 12. And you go through all of those different possibilities and you work out that there are equal number of times you win when you trade versus when you keep it. Okay. That's the way you can work it out.
[00:12:57] And with the Monty Hall problem, what's critical is that Monty Hall always makes sure that he never opens up the winning case, or the winning door, whichever it is. So as a result, it takes some of these branches and, oh, sorry, my whiteboard marker's not working, takes some of these branches and he crosses them off. They no longer exist. So some of the probability of those gets shifted to other branches.
[00:13:27] And that asymmetry is what makes the Monty Hall problem different than Deal or No Deal. It actually forces more probability into the win column if you decide to trade as opposed to keeping your case. So that's how you can work it out for yourself, because I definitely haven't gotten into days-long Twitter arguments with people about it and definitely don't still think about it like it's my Roman Empire. Okay. Okay. All right. I think this would be great on Dondi. Oh, thank you.
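The whiteboard tree can also be written out exhaustively. Here's a small sketch of the three-case version described above, where you hold case A and open one of the others at random; the labels and function name are mine. Conditioning on the winner not being revealed, keeping and trading win equally often:

```python
from itertools import product

def count_outcomes():
    """Enumerate every branch of the 3-case game where you hold case A."""
    cases = ["A", "B", "C"]
    keep_wins = trade_wins = winner_revealed = 0
    # Branch over where the winner is, and which of the other cases you open.
    for winner, opened in product(cases, ["B", "C"]):
        if opened == winner:
            winner_revealed += 1  # big prize was shown; no final decision left
        elif winner == "A":
            keep_wins += 1        # the case in your hand was the winner
        else:
            trade_wins += 1       # the winner is the one unopened case left
    return keep_wins, trade_wins, winner_revealed

print(count_outcomes())  # (2, 2, 2)
```

With no host steering the reveals, the surviving branches stay symmetric between keep and trade, whereas Monty Hall's forced losing reveal deletes branches asymmetrically.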
[00:13:56] Well, you know, producers, you can find me on Insta at Chubicki, et cetera. You know where to find me. We should put this in the Dondi podcast feed. Get it in front of the right people. That would be great. I'm actually thinking of doing like a real, I want to make a bit of a pivot to more public-facing stuff with my science education. I'd basically have Dondi not equal to Monty Hall, you know, it'd be my thumbnail on the YouTubes would be like, you know, with me making some kind of hilarious face like the YouTubers do.
[00:14:26] That's what I'll do. So get this out there. Okay. Thank you. Thank you. Yeah. Okay. Allison has a question. I've never been able to solve a Rubik's Cube. Any advice for beginners? Christian, have you ever solved a Rubik's Cube? I have. I have. I was into cubing, as I think the children called it at the time, back in college. One of my fellow engineering students kind of got me into it, and they showed me how it kind of worked, and I thought it was really interesting.
[00:14:55] So he taught me a few techniques to how to solve it. Now, to be clear, I was never a speed cuber. Like my best time was like under a minute. I can get under a minute. I can solve a cube. Oh, thank you. That's good. Now, the records are like four seconds. So I am nowhere near that. And if I were trying to do it on the podcast right now, I'd probably choke. But I actually – it's been so long since I did one that I could only find one Rubik's Cube in my house, which I brought with me today. And I will try to narrate this for your podcast audience.
[00:15:24] It gives you a primer because I'm not a pro. I'm not a pro. But I can give you a primer as to the key things to think about when you're solving a Rubik's Cube. Okay? Okay. So this is a Rubik's Cube that Emily had. It's a novelty Rubik's Cube with nonstandard colors on it. So when people first start a Rubik's Cube, they start saying, hey, I'm going to start one side at a time. Okay? I'm going to try to finish off one side, make it all one color. See if that gets me anywhere. Not a bad idea.
[00:15:51] But it's actually kind of the wrong idea in a subtle way. You see, I'm looking here at this cube. And I'm going to arbitrarily say that this green side is the top. Okay? Now, you are not actually moving around sides of the cube. You are moving around pieces of the cube. For instance, if I look at the corner of this cube right here, I see that there's a corner piece that has green, white, and orange on it. Those are the three colors I got on that.
[00:16:18] There is only one piece that has green, white, and orange on it. As a result, it's got to go up here. All right? And what do I mean by up here? Another facet of the Rubik's Cube that's interesting is you look at the center of each of the pieces. Okay? No matter how many times I twist this Rubik's Cube, the center of one of those sides will never move. Okay? On my cube, my green is opposite to yellow. All right? On this particular novelty cube.
[00:16:45] So green, middle, the middle of green will always be opposite to yellow. So as a result, that means that if I'm twisting this cube around, that piece I was just talking about, I know exactly where that piece has to go. Not any old green side, the piece. So if you think about the fact that you're moving around pieces and not sides, that leads to some interesting implications. The first of which is that don't think about sides. Think about layers.
[00:17:15] Okay? We solve this Rubik's Cube in layers. And what we mean by a layer is if you take the top of the cube, you twist it around, that top twist, that's layer one. Okay? That middle twist, if you twist the middle around, that middle one, I'm trying to do it on stream. That's layer two. And on the third one, there's layer three. Because each layer has a certain set of pieces that have to go on that layer. It doesn't matter if you have a side.
[00:17:44] It can still be completely wrong if the pieces are wrong. So if you solve at a layer at a time, you are actually a lot closer to solving the cube. That's step one, Rob. I'm happy to pause there if you have any questions or if it's also obvious. This is fascinating. Okay, excellent. So you want to solve layers. So let's go back to the cube. And so what that means is you actually are best solving the cube one layer at a time. So I'm going to scramble up the cube a little bit, okay?
[00:18:14] And I'm going to give you an example. Okay, just running a little series of moves here. So you can see here, if you're watching the podcast, it's all messed up. Well, specifically, the top actually looks okay. This is what's called a scramble. That's the top. The top looks fine. Twist it around. In fact, that's the top layer. The second layer also looks fine. But the bottom layer I've screwed up, okay?
[00:18:39] So what you need to do are come up with a series of moves that only screw up a certain layer of the cube, okay? And shifts them around, okay? So I'm going to do a quick thing real fast. So basically, the top two layers of this cube are solved, all right? And what I'm going to do is I'm going to try to come up with a series of moves, and I already know this series of moves, that's going to pop out one of these middle layer pieces and move it elsewhere, okay?
[00:19:06] So you can't see what I'm doing, but definitely no pressure on doing this in the podcast, because if I screw up, it'll screw up the whole demonstration. Okay. All right. Now I have the top layer and middle layer right except for one piece right there. I tried to do this in reverse as a challenge. That one piece is wrong, okay? So I have what we call a series of moves, which we have a name for in cubing. It's called an algorithm. A series of moves is called an algorithm.
[00:19:34] That's able to just move one piece on the middle layer, okay? So you come up with series of moves that only move a piece on a particular layer, and you solve it one layer at a time. So that's sort of step two. Is that making sense so far, Rob? I'm hearing the words you're saying. I don't know necessarily how you're going to do it. Okay. That's totally fine.
[00:19:58] So the easiest thing that you can start by doing is start with trying to solve the top layer of the cube. When you decide that one side is the top, that's your top layer, okay? So what you want to do is if you're trying to solve this as a puzzle and not trying to speed cube. If you're trying to speed cube, just look up algorithms on the internet. They will tell you series of moves. You want to move that piece there? Use this algorithm. There are literal instructions, so then it becomes a party trick.
[00:20:26] So a lot of people, a Rubik's Cube is a party trick, okay? So, but if you want to go through the experience of solving the Rubik's Cube, think about it one layer at a time and start with the first layer. So what I'm going to do is I'm going to move one piece out of the way, okay? So if you're looking at the video feed, I've moved one piece off the top layer, which at the top is green, and then you see one is out of place, okay?
[00:20:50] So then you could try to – then you don't care if everything else is kind of getting screwed up as long as that top layer, whatever you put onto it, doesn't screw up the rest of the top layer. So I'm going to just make a – so I'm looking around for that one green piece I've moved. It's right here, and I want to move it to the top. And I know this is super interesting for podcast listeners, so I'll move it along.
[00:21:13] But I can, in fact, just slide – so I can take the top, I can slide a piece down, slide it over, and then back up. I know that was probably incomprehensible to see, but it puts it back in place. That screwed up all kinds of things beneath the top layer. But are you better off because the top layer is set? Exactly. You're better off because the top layer is set.
[00:21:38] Now, anything you want to do from now on is to be a series of moves that when you've completed them puts the top layer back in place but is scrambled around the other pieces. So you can screw around with that and say, okay, all right, so I'm going to do this series of moves. This will put my pieces back in place but shuffles all the other ones around. Then you can sort of on a piece of paper say which piece went where.
[00:22:03] And you can come up with your own series of moves or an algorithm in order to then layer by layer solve the cube. That works for a 3x3 Rubik's Cube that I'm holding. It works for a 4x4. It becomes more complicated, but that's the process. Okay. Thank you for listening to that, Rob. I know that. And also, audience, audience, thank you for listening to a visual demonstration. Yeah. No, that's incredible.
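One way to see why cubers call a fixed sequence of moves an "algorithm": any sequence is just a permutation of the cube's pieces, and repeating any permutation enough times brings every piece back to where it started. This is a toy sketch of that idea, not a full cube model; the example permutation is made up for illustration:

```python
from math import lcm

def cycle_lengths(perm: list[int]) -> list[int]:
    """Cycle lengths of a permutation given as a list mapping i -> perm[i]."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        i, n = start, 0
        while i not in seen:      # follow the cycle until it closes
            seen.add(i)
            i = perm[i]
            n += 1
        lengths.append(n)
    return lengths

def order(perm: list[int]) -> int:
    """Smallest k such that applying perm k times restores every position."""
    return lcm(*cycle_lengths(perm))

# A made-up "move sequence": 3-cycles pieces 0,1,2 and 4-cycles pieces 3,4,5,6.
seq = [1, 2, 0, 4, 5, 6, 3]
print(order(seq))  # 12: repeat the sequence 12 times and all is restored
```

This is also why a half-finished algorithm looks like chaos: you're partway through its cycles. On a real cube the same reasoning applies to the moving pieces, e.g. a single face turn has order 4.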
[00:22:28] I think at one point I tried to look up a YouTube video of how do you do it, but I feel like my son is able to do it, but I don't know if he's just like messing with us. Anthony or? Dominic. Dominic. Dominic. Yeah. So, yeah. I mean, well, if he ever wants to. When I was a kid, I used to take the stickers off and then try to put it back together. I mean, that'll do it. And if you want to really screw someone up on a cube, you just swap any two different-colored stickers and it will make it unsolvable. But, you know, here's the thing. Like when I was a kid in the 80s, it was like, okay, you got a Rubik's Cube.
[00:22:57] And then once it was messed up, like there was no resource. Like you couldn't go on Google or watch a YouTube video or a TikTok of here's how you solve a Rubik's Cube or listen to a podcast. It's like you got it. It got messed up. And then it was like, well, I guess I'll never solve this thing. Well, and that's kind of, there's sort of a beauty to that because like nowadays you can go on TikTok and then do it. You get almost, like if you're trying to do a cube like this for the first time, you get attached to like, I had this whole side together.
[00:23:26] It's now, but I, like I made progress, maybe two sides. I think that's what I did when I was a kid and I had one of these things. But I think that, so if you're trying to get started with it, just thinking about the fact that you're moving pieces, not sides. And therefore that means you want to solve it layers at a time that fundamentally changes your thinking in terms of how you're solving it. So it's a new year. You know what that means, setting big goals. Maybe you promised yourself you're going to hit the gym every day. Or maybe you said you're going to learn to make fire with a flint for once.
[00:23:56] Or the classic, save more money. But let's be honest, New Year's resolutions tend to fizzle out by February. Wouldn't it be great if at least one of those goals could be automatic? That's where Acorns comes in. Today's episode is sponsored by Acorns. You've probably heard me talk about them before and I'm excited to share how Acorns makes it easy to start automatically saving and investing your money. So it has a chance to grow for you, your kids, and your retirement. You don't need to be an expert.
[00:24:22] Acorns will recommend a diversified portfolio that fits you and your money goals. You don't need to be rich. Acorns lets you invest with the spare money you've got right now. You can start with just $5 or even just your spare change. You don't need a ton of time either. You can create your Acorns account and start investing in just five minutes. Basically, Acorns does the hard part so you can give your money a chance to grow.
[00:24:45] Using Acorns is a game changer for somebody like me who spends a lot of time thinking about strategy, whether it's on Survivor or behind the scenes. It's refreshing to have an app that makes investing simple and automatic. It's a small step that gives me confidence that I'm building a better financial future without stressing over the details. If you've ever felt overwhelmed by investing, this is a great place to start. Head to acorns.com slash robpod or download the Acorns app and start saving and investing for your future today.
[00:25:14] This has been a paid non-client endorsement. Compensation provides incentive to positively promote Acorns. Tier 1 compensation provided. Investing involves risk. Acorns Advisers, LLC, an SEC-registered investment adviser. View important disclosures at acorns.com slash robpod. Christian, I've got a question for you from Karan who says, what is the hardest thing you've ever had to do, excluding Survivor?
[00:25:41] So thank you for asking this, because I have trauma I need to unleash onto this podcast about the hardest thing. You can trauma dump on me. Thank you, Rob. Thank you, Rob. The hardest thing I have had to do, perhaps including Survivor, was hang a painting. And it sounds not, yeah, it sounds not hard when I say it like that. Hang a painting. But you've got to understand.
[00:26:06] So my wife, Emily, her mother is an artist and has artist friends and friends of the family, one of whom gave her family a painting. And Emily really wanted this painting in our house. And I understand. It's made by a family friend. It's sentimental. The challenge of this painting – so I'm cool. She's great at putting art around the house. I leave that to her.
[00:26:34] Otherwise, you're looking at my art right now. A whiteboard – seriously, if you saw my office here, it's highly functional. This is my art. So I'm happy to let her do it. The challenge with this painting is that it wasn't mounted on anything. It's not – it's not mounted on like an actual hard board or anything. It's just a piece of canvas that is eight feet long and five feet wide.
[00:27:00] And that is a very large canvas that you need to hang. And the place that we need to hang this in the house, the only place it will fit is in our foyer, which is two stories tall. And it needed to be hung at the – where the top part is at the very top of the second story for which there is no way to reach it but a ladder. And I hate heights. I hate them.
[00:27:24] And the sheer logistics of getting this painting hung up on the wall, which by the way – so the top part of this wall is going to be 19 feet in the air. So which if you were – Angelina would know that that is a scary height. That is terrifying. It feels like 50 feet. It's horrible. And so I hate heights. And so in order to create – in order to hang this like loose-fit canvas painting, think about the logistics of it.
[00:27:49] How would you hang a five-foot long – a five-foot wide, eight-foot long canvas painting 19 feet in the air? And the answer is you buy a very tall ladder. I bought a gargantuan ladder. And I measured where I want different mounts to go. And I had to take the ladder and individually move it to all the different mounting points and then climb up, mark it, climb down. Climb up with a drill. With – by the way, one hand on the drill, one hand on the ladder.
[00:28:18] So there are points where I have no hands on the ladder. I would die if I fell probably or at least be horribly injured. And do this over and over again. And then to the point where I have drilled holes. I have put the mounts on there. And then the final step of this process was getting this painting up high. And so keep in mind, I have to climb this ladder. It's at an unsafe angle. It is extremely steep because my foyer is thin and not wide but tall. So my ladder is at an unsteep – a very steep angle.
[00:28:49] It's very high up. And so I have to climb up with the canvas painting draped over my head and down my back like a cape. And I have to climb the ladder with legs only, leaning forward, kind of rubbing up against the ladder as I step up one step at a time until I'm up in the open air. So is there a frame for the painting? No. No frame for the painting. Just canvas. So this whole thing is loosely on there. And I have to take the four mounting points and just grab two of the mounting points on the canvas.
[00:29:18] And then take my hands and slide them up and over in one clean stroke to land them on their mounts. And that took like three attempts. And I am sweating like a banshee. And I'm like, I managed to get two of them on there. And the next day I came and finished the other two. It was the scariest thing I've ever done in my life. No, it's a nightmare.
[00:29:40] I can't tell you how many times my wife loves things that go into the wall, whether it's, you know, whether it's pictures, whether it's floating shelves, whether it's like any kind of thing that, oh, I need this on the wall. I need this on the wall. Yeah. Yeah. And I tell people, I was telling my extended family this story. And also like I was dreading and they're like, it's like, that's Christian. That doesn't sound that scary. And then they come and see the house like, oh, that's high up.
[00:30:10] It is very high up. And on top of this, like I was dreading doing this for so long that this eight foot long painting was sitting in one of the rooms of our house on the floor because you can't fold this thing. It's got paint on it. You're going to crack it. So it's taking up an entire room of the house for months at a time. And I'm like, I got to put this up. And on top of it all, the painting itself is a very nice artistically done painting.
[00:30:37] But it is of a Japanese woman preparing to bathe, partially nude. Yeah. And so a family friend made the painting. It's a very nice painting. And I'm just like, I'm going to put this up there. Everyone's going to assume this was Christian's idea. This was Christian's idea for a painting. No one's going to think this was Emily's artist friend. So I'm going through all of this for people to think.
[00:31:01] It's like, why does Christian want this partially nude giant painting in his foyer of a woman trying to bathe and risk his life to do it? So, yeah. So this was a lose on several levels. But it is a very nice painting. And now it is in our house. It only needs to be touched up about once a year where I'll risk my life again. Okay. Well, all right.
[00:31:26] Since we're on the topic of Japanese women, Christian, a different Christian has a question. Has Christian heard of the town in Japan that is intentionally blocking the view of Mount Fuji because of the annoying tourists? Christian has heard of this. In fact, I confess this is the first time I've ever done this. Christian planted this question. Yes. Oh, you're the question. The Christian question. We've had questions from people named Christian before. It's a reasonable, reasonable guess. So, yeah.
[00:31:55] So I have heard of this town. I won't. Yeah. Go on, Rob. Yeah. I feel like this rings a bell for me. I feel like it's something that we might have talked about on a News AF. Oh, really? Interesting. Interesting. Not recently. So it was in the news like around May of last year. Okay. And I was reading about it. And it's a coincidence because we are about to actually go to Japan. Oh. In May. We're about to leave. And I was reading the story. Oh, that's crazy.
[00:32:24] And so I had actually had a robotics conference in Yokohama. My student was giving a presentation of our paper on robots that can walk and roll. It was pretty – it was a cool paper. And we – and Emily – and I was like, look, Emily, she loved – she worked in Japan for a year. And she would love to go back. And so it's like, hey, let's go back, both of us. I'll do the conference for a week and then we'll go and travel around Japan. And I was like, we got to do this.
[00:32:53] And that took zero convincing on my part. And so we – so she did a whole travel schedule of going around. And sure enough, she booked a trip to a town around Mount Fuji. Okay. And so – and I'm like – and then – so we get off the train that takes us to this town. And we're looking outside. It's like, man, the sun is pretty bright out here. Let's get some sunblock. And so – and we get to the gas – to this little convenience store.
[00:33:21] And it turns out it's a convenience store with a beautiful view of Mount Fuji over it. It's that exact story you're talking about. We didn't even realize it, because the story is that there's this hilarious and majestic view of Mount Fuji overlooking a convenience store. And Instagram people love taking photos of it to the point where they are annoying the town so much that they're putting up a screen to block it.
[00:33:50] That's how much they – it's been a problem. So we're like, oh, my God. We are at this historic site, this historic, quote, unquote, convenience store. I can't believe we're here. And sure enough, like right as we're looking at this, a camera crew comes over to Emily and I and said, excuse me, can we interview you guys about this view? I was like, sure. And they're like, yeah, do you know what this is? Like, is this the convenience store? Like, it is. This is like a Japanese news crew through a translator is talking to us.
[00:34:20] This is the convenience store. It's like, yeah, we heard they're going to put up a screen to block the view. I was like, yes, they are. Do you know when it's going up? It's like, when? Like two days from now. Like, what? That's crazy. And they're like, well, are you going to take a photo of yourself in front of the gas station? And we're like, well, we don't want to be disrespectful. There's a reason why this town is so mad at the tourists. We don't want to be part of the problem. And the guy's like, no, you guys have been very respectful. In fact, why don't you go grab a photo? Give it to us.
[00:34:49] And we can actually use it in our news broadcast that we're doing on a story about it. I'm like, oh, cool. Well, that's awesome. So like, okay, so Emily and I will very carefully cross the street because part of the reason is that there are a lot of jaywalkers and Japan does not love jaywalking. So like that was one of the reasons I think they're annoyed by the tourists running across the street. So we carefully, we get a photo. You know, we take it back to them. We give it to them. They're like, thank you very much. And then weeks later, we get back to the States.
[00:35:17] Emily finds the news broadcast about this on, I think it was a national Japanese station. And we found the story. And sure enough, we're listening to the story. And Emily is sort of translating because she speaks Japanese. And she's like, and it says, and you have many tourists here trying to get Instagram clicks. And they cut to a photo of me and Emily. They hated us. We were part of the problem. Yeah. We were. That was entrapment.
[00:35:46] It was entrapment. I feel like that's what a lot of reality shows must be like, is they're like, oh, don't worry, just do this thing. So we were entrapped into being the annoying tourists. Yes. You've been very respectful. Please just take a picture. So that is my story associated with this town. They have now erected the screen. Okay. All right. Christian, let's do a science question. Jason White wants to know, who do you think is currently the most influential person in
[00:36:16] robotics? That's a great question, Jason. And currently most influential is a tricky problem because robotics is a very fast changing field. And there's a lots of different pieces. There are lots of different subfields within robotics. So people in different subfields will give you different answers. And there are lots of people who have been influential who aren't necessarily roboticists, but they influence the field a lot. And it's hard to know where this current push is going to shake out.
[00:36:45] Right now there's all kinds of, how are people going to use AI and robotics? What tools, how is AI going to last? In what way, what capacity? We don't really know. So I was thinking about how to answer this. And so I think I went with like, who would be, who has stood the test of time and was very influential to me and my subfield of robotics, which is mobile robotics, legged robots, the robots that walk and run.
[00:37:10] And I think that a good answer, a good answer here, and not the only answer, is a gentleman by the name of Marc Raibert. M-A-R-C, Raibert, R-A-I-B-E-R-T. He's a guy who I actually see at conferences. He's extremely famous. I have never once talked to him. I'm too scared. I'm too scared to talk to Marc Raibert. He's just that well-known. I'm so worried I'll sound like an idiot in front of him.
[00:37:39] It's irrational because I have a tie to this man. This man was my PhD advisor's PhD advisor's PhD advisor. So I'm like his great grandchild. And I can't talk to him. And he's easy to spot because he's always walking around. He's got a bald head and a bright Hawaiian shirt. Like that's how you know him at the conferences. And so my students have talked to him. I won't talk to him. And that's out of pure fear. So what is he famous for?
[00:38:08] So he started something called the Leg Lab. He started at Carnegie Mellon University and eventually moved to MIT. He does legged robots. He's well-known for it. So he, back in the 80s, he wrote a book called Legged Robots That Balance that is out of print, but someday I'll have to buy for like $400. And a lot of people assume that robotics is a series of pretty complicated equations.
[00:38:36] These are sort of like medium to light equations on the board behind me. Very complicated equations exist in robotics. His were even simpler than this, far simpler than this. And they actually enabled robots to bounce around on two legs and balance. And so this book, it's just these beautifully intuitive equations he came up with that for how to control robots. So that way they can walk and run and balance.
[00:39:04] And famously, he had these air-powered pogo stick robots that had a big air hose that would run to them. So imagine there's like a dome on two pogo sticks, and the pogo sticks push off. And he could make these robots bounce around and do flips. It was amazing. This is in the 1980s. And he made a lot of robots like this. And so he's very famous for making robots walk around on two legs.
[00:39:31] And then in the 90s, he started a little company. You might have heard of it, Rob. It's called Boston Dynamics. Oh, yeah. Yeah. He started that company with one of his PhD students who graduated. So he made Boston Dynamics. So they originated the BigDog robots and then later the littler dog robots. The company started as a software company that, around 2000, got a contract to make legged robots, his real wheelhouse, so to speak.
[00:40:02] And then in 2005 on YouTube, without any ceremony, someone just released a video that had been edited of their gas-powered robot dog called BigDog walking around in the snow. And then an engineer comes up and gives the dog a huge kick and the robot catches itself and stays up. Okay. The kick heard around the world. Yeah. So you can look up just BigDog, Boston Dynamics. You'll see the video.
[00:40:32] I don't believe in kicking any kind of dogs. Yeah. It created a movement called Stop Robot Abuse, which is only slightly tongue-in-cheek, Rob, because it actually became popular to kick your robot. I've kicked the robots, and I've had people say, don't do it. So I've stopped because I'm tired of making people mad.
[00:40:54] But so that's – so far, that was his career in academia, started a company. So basically Boston Dynamics, for those who didn't know, is probably like the most famous robotics company in the world. It's super well-known. And so he made the BigDog robot. He – at least his company did. His company then worked on a humanoid, like a two-legged robot. And it was a full humanoid with arms and legs. And it actually walked in a remarkably human-like manner.
[00:41:24] And some of you are probably wondering, who's funding these robots? And the answer at the time was the military, the thing that people, you know, wonder about in like science fiction things. It's true. The Big Dog robot was supposed to be a robot pack mule. So that way – because real mules were hard to control and they thought a robot pack mule to carry material for soldiers would be really helpful in the field. It was a cool robot. Unfortunately, it was too loud. It ran on a gas engine, like a two-cycle engine.
[00:41:54] So you could hear that thing coming a mile away. So it didn't really – it didn't get fielded for a couple of reasons. Then the DOD funded this humanoid robot. You might say, well, for a robot soldier, of course. And the answer is no, no, not for a robot soldier. It – you can look up online. The robot is called Petman, P-E-T-M-A-N. And the robot was – and you can find some videos of it in a chemical warfare suit.
[00:42:18] And it's because the DOD, the Department of Defense, wanted a robot to test chemical warfare suits because by law, chemical warfare suits must be tested with a real chemical warfare agent and not just some other gas that's safe. So they want to test these suits with someone inside of them doing jumping jacks, doing calisthenics, doing all the things a soldier might do but without endangering the life of a soldier. So make a robot that can go inside of it.
[00:42:47] And so they have this walking robot whose only job was to test chemical warfare suits. It's crazy. And they made – later they made the Atlas robot, which is probably one of the most famous humanoids in the world, at least until maybe recently. It's incredible. And all that is the legacy of Marc Raibert and, of course, his students and people who helped him. Mm-hmm. And will you ever one day, Christian, speak to Marc Raibert? Someday.
[00:43:17] I got to make it my goal. Are you waiting for the right opener? Yeah, I think if I see him, I'm not sure what to say, because I could be like, hey, I'm your great-grandson. I can say that. And I could come with the DNA test. I don't know. But I think that it's – like, I want to have the right opener. Like, what do I say to him? I talk with his direct students, and they were super nice, like my grandparents. I talk to my grandparents. But I haven't talked to my great-grandpa.
[00:43:45] I think I'd love to have something to show him that would impress him about what I've done. And that's what I'd like to do. Yeah. Well, when people maybe are nervous to approach you, what have they done that has impressed you? I mean, honestly, it's just they come up and – I mean, I don't need to be impressed. I think people can just come up. It's me projecting. I see what you're doing here, Rob. This is therapy. I appreciate it. I'm loading more of my trauma on you.
[00:44:12] This is the trauma episode of Dr. Hubicki, the Ask Dr. Hubicki. And yeah, I mean people come up to me. It's like, hey, I just – whether it's for the show. I was like, hey, I'm just going to say I really like – I thought it was cool how you were on the show. Can I get a photo? That's great. And I'm sure I can just do that. It's like, hey, look, I'm your great-grandson on Jonathan's side. And it's like I've been working with robots, obviously. I like what you do.
[00:44:40] I would also kind of like – if I'm going to get to talk to them one time, I'd like to have some kind of ask or have something to build toward. And honestly, for me, I would love to make like the definitive video on how some of these robots work and put it up on YouTube. I know the approaches to control that they use. At least, well, I don't know the exact details because that's proprietary. But I know the approaches. And I can explain them.
[00:45:03] And part of me wants to be like, look, I want to be – so I would love to have like a version of like – like on YouTube, I've explained how similar robots work. And I would love it if I could do one focused on this robot and do something like that. I'm nervous just telling you about this, Rob. Anyway, that's what I think. But maybe you're being a little too ambitious. Yeah. Yeah, just say hello. Hi. I'm Christian. Yeah. Yeah. Maybe that's it. Can I get you a soft drink?
[00:45:33] That's a little too subservient. I have to have some self-respect here, Rob. I guess so. Yeah. Anyway. But yeah. All right. That's Marc Raibert. There are many influential people in the legged robotics sphere who are great. I think that just in my history, for a variety of reasons, Marc Raibert is one that stands out as very influential on my trajectory. Okay. All right. Let me ask you a question. Okay. Emmy wants to know, I want to ask Dr. Hubicki, what do you have for lunch every day? Every day. Well, let's start with Monday.
[00:46:02] On Monday. So, I actually have a pretty consistent lunch schedule where I actually block off an hour of time most days to cook a lunch. Like, actually cook one from scratch. And normally, it's like a pan-fried chicken with vegetables and rice. And so, like, I just have – it's a routine that I just protect. Like, whatever is going on that day, I'm going to cook. So, I have my rice is cooking in the morning.
[00:46:30] I set the rice cooker to finish around lunchtime. I will drive home sometimes to cook lunch. I mean, I don't live that far away from my house. I'll drive home to cook lunch and cook. And I will season the chicken, oil it up. And this is one day a week? I do this a couple times a week. I do this three times a week. Yeah. I'll do this. Wow. Yeah. So, it's just some time where I'll put on a podcast or audio book on my headset while I'm cooking.
[00:46:56] So, it's just something that puts me in a relaxed space where I'm doing something methodical while listening to something. That's what I'll do. Yeah. Okay. What do you do the other two days? Oh, it's chaos, Rob. Utter chaos. If I'm lucky, there's some leftovers. Or I'll run out to, like, whatever the closest, like, takeout place is. It's just chaos. I will forget that I have to get a lunch that day. And I'm like, well, what do I do? And you'd think I would learn by this point, Rob, that I have to get a lunch ready for a given day. But no. The other two days are chaos. Yeah. I'm pretty big on food preparation.
[00:47:26] I eat very similar meals almost every single day. I know my wife would hate it. She could not do it. I love it. It's great. I love it. It's just, it's something, it's, I don't know. What do you love about it? Describe your feelings when you're doing your food prep. So, what I love is, I don't have to think about it. You know, I don't need variety. Like, I've got plenty of excitement in my work. Look at what I'm doing right now. Okay?
[00:47:54] So, I tend to, like, go shopping on Sunday or Saturday, maybe even. And I end up going to, like, Sam's Club. And I end up just, like, I know exactly, like, the 11 or 12 staples I need during the week. And then I'm going to eat the same thing for breakfast, lunch, dinner, dessert, snack. Boom. Nice. So, I'm with you on that. Like, for me, it's like, the similarity is, like, wardrobe for me.
[00:48:23] Like, I have my one work outfit that I will wear basically every day. I don't want to have to think about it. And then, also, it ends up being your signature look. So, people recognize you from a mile away. You know, it's like people, people know it. Yeah, you have a brand that way. And it's one less thing to think about. I like that. I think that where my variety comes in is when we cook dinners. And Emily is very good at planning a variety of dinners for cooking. I probably mentioned before, she has this way of meal planning.
[00:48:52] Like, we'll have, like, a quarter cup of cream cheese left. And she's like, we will use this cream cheese in some upcoming meal. And she will structure the upcoming meal so that cream cheese will be used by the end of the week. So, you're always cycling through ingredients. And that sort of, that need ends up driving you to new recipes that then create new ingredients you want to get rid of. So, then the variety comes in dinner, I suppose, my lunch, which is pretty routine. Okay.
[00:49:18] Pre-Shalen says, Christian, can you please explain how you used one of the scientific methods to search for an idol when you were on Survivor? So, this, I think, unironically, is my proudest moment on Survivor was finding an idol. Not just finding an idol, but the fact that I describe in a sequence that I'm using what's called a breadth-first search for the idol.
[00:49:46] Because it means that I have this sequence that I can play in one of my lectures when I explain what a breadth-first search is. And it is always a hit in my classes. And, frankly, if I'm ever at a conference and I'm talking to other professors and it somehow comes up that I was on Survivor, most people think it's cool. Most people think it's really neat. But also, like, the few that are like, why would you do a reality show? Don't you have papers to publish? And it's like, well, thanks for guilting me, number one.
[00:50:13] But, number two, like, I can say, well, I got to do this. I can point to the breadth-first search episode. It's named after that. And I can say, oh, yeah, I got to explain this algorithm to millions of people. And they think that's cool. I've yet to find someone who didn't think that's cool. So, yeah. Any chance Marc Raibert cares about breadth-first search? You know what? There is that angle. I didn't know whether or not to bring up the Survivor angle with him or if that's gotten back to him. I actually don't know. Sometimes I'm surprised.
[00:50:43] How much of the field knows? Like, oh, yeah. Do you know Christian's work? It's like, yeah, kind of. Did you know he was on Survivor? What? But the – and I think that the breadth-first search is – in brief, it's a kind of search where you intentionally don't dig too deep, literally and metaphorically, in any given area. You look a little bit intentionally in a lot of different places. Mm-hmm.
[00:51:06] And so, this is how Google will crawl the internet to look for websites. So, imagine you're doing a Wikipedia deep dive, right? So, if you go on Wikipedia and you go to a page, and you just click the first link on that page and look at it, okay. Then you look at the second link on that page. You look at it, and it takes you to different topics that maybe vary slightly. You're getting some breadth. The opposite of that is a depth-first search where you would click on a link.
[00:51:36] Then you go to that link. And you click on a link at that link. You click on a link at that link. And it takes you deeper and deeper and deeper into the topic. That would be a depth-first search. It turns out that breadth-first searches are really good for finding lots of stuff on the internet. Depth-first searches are used for other things. Can you tell me the word – it's not bread-first. It's breadth. Yeah, with a D-T-H. Breadth-first search. Yeah, it's breadth-first search. B-R-E-A-D-T-H, for those who can't hear it over the microphone.
[00:52:05] So like breadth versus depth are the two things that are opposite each other. So you want a really broad – think about it as broad search. Why isn't it a width-first search? Because computer scientists name things that computer scientists want to name. It's not the worst name; width-first search or wide-first search could also work. But yeah. Yeah. Is there anything else that's breadth?
[00:52:34] You talk about wanting a lot of breadth on a topic. You know, let's say how broad – it's not that uncommon a word. So could you say a Subway where you get your sandwich has a breadth of options that's incredible? Well, then you have literal bread at the Subway. So that's really a poor choice of words. Bread and breadth. You wouldn't want to do that. So – but that's what it's called. It's in every algorithms textbook. You can find it, right? And so that's good for finding an idol because it could be any number of places on an island.
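The link-clicking description above maps directly onto the two classic graph traversals. A minimal Python sketch over a made-up link graph (the page names and links here are invented for illustration):

```python
from collections import deque

# A tiny hypothetical "Wikipedia" link graph: page -> pages it links to.
LINKS = {
    "Robotics": ["Legged robot", "Control theory"],
    "Legged robot": ["BigDog", "Pogo stick"],
    "Control theory": ["Feedback"],
    "BigDog": [], "Pogo stick": [], "Feedback": [],
}

def bfs(start):
    """Breadth-first: look a little bit in many places before going deep."""
    order, queue, seen = [], deque([start]), {start}
    while queue:
        page = queue.popleft()          # FIFO: oldest discovery first
        order.append(page)
        for nxt in LINKS[page]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(start):
    """Depth-first: follow one chain of links as deep as it goes."""
    order, stack, seen = [], [start], {start}
    while stack:
        page = stack.pop()              # LIFO: newest discovery first
        order.append(page)
        for nxt in reversed(LINKS[page]):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return order

print(bfs("Robotics"))  # visits both first-level links before any deeper page
print(dfs("Robotics"))  # follows "Legged robot" all the way down first
```

The only difference between the two is the data structure holding the frontier: a queue gives breadth-first order, a stack gives depth-first order.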
[00:53:03] But that was not my first approach to finding an idol. I think that's less well-reported. Back at the beginning of the game, I tried the opposite approach, which is called the depth-first search, where you dig and you look in one place a lot before you ever move on. And that's not very efficient. You're likely to lose out on finding the idol. I, for a variety of reasons – this is the mentality, Rob. Trauma. Trauma. Trauma will come back. I was trying to find an idol like everyone else.
[00:53:31] And Elizabeth on my season very thoughtfully noted that around our beach were all of these little wooden masks. This was like a theme of our season, that David vs. Goliath had these sort of Fijian masks. They look a little like Easter Island heads, a little bit like that, but they were more Fijian kind of tiki things. That was sort of the artistic theming of what our hidden immunity idol looked like.
[00:53:58] And they're just masks, these big wooden masks that were probably one or two feet long. They actually use them as part of the shelter. And they're just around. And it's one of those things you just kind of ignore that they're there. And so like, are they clues? Like, are they clues? And it looks like the same thing that's at the water well, that covers the water well. So we're like – I became convinced, via some reasoning Elizabeth kind of tipped me off to, that these masks, they're so important.
[00:54:22] And in fact, she was convinced that more were showing up on the beach as a hint that the masks were important. So I actually went and stole the masks and hid them off in the forest to see if any new ones would show up. That way I didn't have to keep track of where they all were. I ended up stepping in a fire ant mound in the process of hiding them. But anyway, it took me to the one place that I thought the idol would be, which would be the water well. And so I was literally digging around the water well for a while. And Davey already had the idol on my season.
[00:54:53] And subtly tried to tip me off that maybe I shouldn't be looking there, but I was convinced. So I kept looking at that one place. And looking at that one place as far as you can is called a depth-first search. That's the opposite of a breadth-first search. So breadth-first or broad or wide-first, whatever you call it, Rob. Good idea for idols. Depth, deepness, not good. Yeah. Now, have you ever anecdotally, like with other players, have they also said, okay, that is the way to do it?
[00:55:21] Or did you maybe just get lucky that you happened to, in your breadth-first search, run into where you were supposed to be? 100% I could have gotten lucky. To be clear, the fact that I found an idol while doing this is not a proof. But, like, in general, if you think about how hard they are to find in reality, they're not generally buried. Like, these days, you have your beware advantages. It's just a note that's hanging out of a tree or something like that.
[00:55:50] You want to look around a little bit. But they're not that – but, like, the danger is that you're not even in the right place looking for them. So if you do a cursory search – so the way I would do this, I would literally count on my fingers. I was like, okay, I'm going to be here in this zone. I'm going to call this zone, this sector. I'm going to be here for 45 seconds. And I'll count down from 45. I'm counting on my fingers so I don't forget. And so I have 45 seconds, and then I've got to move on to the next place. And it happened to work nicely in that one particular case. Maybe I got lucky.
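The counted-on-fingers routine amounts to a time-budgeted sweep. A toy Python sketch, with invented zone names and invented search times, of why short passes over every zone tend to beat exhausting one zone first:

```python
# Hypothetical beach zones; the idol sits in one and takes about 90
# seconds of total looking in that zone to spot (numbers are made up).
ZONES = ["well", "tree line", "big rock", "shelter", "cove"]
IDOL_ZONE = "shelter"
SECONDS_TO_SPOT = 90

def breadth_first_sweep(budget_per_visit=45):
    """Give every zone a short timed pass before revisiting any zone."""
    time_in_zone = {z: 0 for z in ZONES}
    elapsed = 0
    while True:
        for zone in ZONES:
            time_in_zone[zone] += budget_per_visit
            elapsed += budget_per_visit
            if zone == IDOL_ZONE and time_in_zone[zone] >= SECONDS_TO_SPOT:
                return zone, elapsed    # found on the second pass over it

def depth_first_dig(seconds_per_zone=600):
    """Exhaust one zone completely before moving on (the water-well trap)."""
    elapsed = 0
    for zone in ZONES:
        if zone == IDOL_ZONE:
            return zone, elapsed + SECONDS_TO_SPOT
        elapsed += seconds_per_zone     # full dig in an idol-less zone
    return None, elapsed

print(breadth_first_sweep())   # ('shelter', 405): under seven minutes
print(depth_first_dig())       # ('shelter', 1890): over half an hour
```

With these made-up numbers the sweep finds the idol in 405 seconds while the dig takes 1890; the exact figures don't matter, only that when you don't know which zone is right, short passes cap how much time any wrong zone can cost you.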
[00:56:19] But I think that, in general, that's probably a better approach to go with because they don't – there was a time where they buried them deep all over the place. Like, I think Russell Hantz, Season 20, had to really dig for his. And they don't do that anymore. Well, you're supposed to have a clue, and he just started digging up the whole island. That's true. That's true. It's a very different time. Okay. All right. Let's go to – back to a science question, okay? Okay.
[00:56:44] Will wants to know, what are the realistic chances of AI-powered robots becoming too aware, such as if they were tasked to better the Earth and they come to the decision that humans are the problem? Yeah. Yeah. So that's a classic science fiction premise. I think that most famously, that was a science fiction – in the book Robopocalypse.
[00:57:09] It specifically was that situation where there was some algorithm that became smart and then it would come up with the solution – it would come to the conclusion that humanity was a problem for all of life and therefore humanity has to go or be put in its place. And there are variants on that in all science fiction. And it's an incredibly unlikely thing in any kind of near future, in my opinion, because – let me explain my reasoning. Because I understand that people see a lot of news headlines.
[00:57:39] There's all kinds of news out there like, oh, this new algorithm, this new version of ChatGPT is smarter than a PhD. It's smarter than this. I'm not sure – well, I'm curious. You probably heard a lot of these things. You cover them on News AF a lot of the time as well. You know, I feel like I have a little bit of like a foothold in learning about AI and generative AI. So I do try to stay on top of the news, not just for News AF. So what sort of – what has been your impression in general?
[00:58:07] Like from your staying on top of it, what is your impression of the sort of readiness of these algorithms to become aware and then decide – what are you thinking? What's your thought? You know, I find that all this AI technology, it's incredibly powerful. It's amazing what it's able to do.
[00:58:25] I do think that there is a legitimate question to be asked about, you know, who is in charge of what AI can and can't do. You have the people who do OpenAI, which creates ChatGPT, and they've, you know, raised incredible amounts of money. And they've done really just amazing things and are really at the cutting edge of things.
[00:58:52] And, you know, there are certain things you can ask ChatGPT and it won't tell you, and there are certain safeguards. But the proliferation of this technology I do think is a legitimate concern, because it really comes down to who is the person who is programming the AI and what it's able to do. And it can unleash all sorts of things in the wrong hands. Yeah, I think that's a very viable take, Rob.
[00:59:22] And, in fact, you say it's incredible what it can do. And the way I typically finish that sentence when I'm talking with you or other people, I say, but it's unclear what it can reliably do. Now, you're taking a different tack as like what should we trust it to be in charge of, right? And I'm sort of a slightly different version of that is that it is not currently a reliable technology, these generative methods to reliably get things right.
[00:59:46] And so when it comes to these – so we all know, probably have heard of the idea that these large language models, these chat bots can, quote, hallucinate, right? But it's basically they just start making things up out of whole cloth and they kind of go on a thread. And there isn't a principled – at least I've not been convinced by any principled solutions to this problem. I think it's – I think in a way it's fundamental to how these things work.
[01:00:14] They're kind of BSing through the things that they've seen before. And they can do it really impressively sometimes, but then sometimes it just goes off on a tangent of like what are you thinking? So these current methods – and it's because on a fundamental level – and this is my read of it – is it does not have a meaningful model of the world. It has – all the information it has about the world, it is learning through the language that is being fed to it.
[01:00:44] So how much information is being encoded in that language is the fundamental limit on what the algorithm can actually understand about the world. And I mean there's a reason in science it's not just enough to – when you're trying to discover something new or trying to push the boundaries of what we can do to just read a book on how a thing is done and just expect it to work. We have to do experiments.
[01:01:06] We have to set up experiments like, okay, according to everything I understand about this theory and this theory, if I put them together, I should – this thing should happen. But I don't know because I need to test it against reality, and fundamentally that's what it needs to do. So there's limitation into what these things actually understand about the world.
[01:01:26] And so for me, if you were to have some algorithm that was somehow magically powerful enough to make inferences that's going to – what's best for the world, right, how is it going to know what's right without experimenting with it? So it's not just something that we would ever trust with this current method of technology to do anything sweeping like, oh, make the world better for everyone.
[01:01:50] That's a – it's one thing to maybe listen to its suggestions like, hey, let's try this thing. What about this? What about this? What about this? And we say, hmm, is that a good idea? Let's try it in a small-scale experiment, see how it works. But the idea that it could have this megalomaniacal understanding and plan of how the world works to operate, that's not this current level of technology, right?
[01:02:15] So that's me talking about where we are now and why I think we are nowhere close to that right now. But then there is sort of the science fiction element, which is what if we had a much better algorithm that was built with a lot of knowledge of how the world works from doing experiments – actively doing experiments in the world, understanding the physics of the world, understanding the social dynamics of the world and what it needs to do.
[01:02:42] How do we keep it from doing something terrible? And that is a really hard question because it requires us to tell it what we really want. We don't know what we want as a species. We don't – I mean we all kind of just do our own – we all do our own thing. Some of us have plans. Some people want to organize. Some people want to become leaders and things like that. But like we don't have an equation that's going to describe what – at least anything close to correctly right now as to what we want.
[01:03:12] Like do we want to have as many people as possible that are happy? It's like, well, what are the implications of that, right? This thing will go to whatever lengths that it could to do that thing. Some people call this the paperclip problem, okay? What if you took an all-knowing robot, an all-knowing algorithm, and said, make me paperclips, and then it turns the entire earth into paperclips via whatever technology it can? It gave us what we wanted, in a way that worked for it. But at what cost? At what cost, right?
[01:03:41] And so that is a big theoretical problem in the future, but it's also a real technical problem right now in terms of how our algorithms work for – well, I'll just go into something I know. Robots, okay? We have algorithms that will train robots in a simulation environment to try to do something. But we have to tell the algorithm what is good. We call it a reward. What are we rewarded for? If it's a walking robot, we can tell it to go forward, right?
[01:04:11] Oh, go forward. Then the algorithm will come up with some crazy thing that flops the robot all over the place and smashes it in the ground as long as it's moving forward, right? And then we'll be like, oh, no, no, no. Don't hurt yourself. Okay. Then it'll do something else that's also crazy or not something that you want. So one thing a lot of us robotics engineers have to be really good at is adding little terms to these equations that say don't do this, don't do that, do this, do that. At that point, who's doing the designing?
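The reward-shaping setup described here – a "go forward" reward plus hand-added penalty terms for everything you didn't mean to ask for – can be sketched in a few lines. Everything in this sketch is a hypothetical illustration (the function name, the terms, and the weights), not code from any real controller:

```python
# Sketch of a shaped reward for a simulated walking robot.
# Every term and weight here is illustrative, not from a real system.

def shaped_reward(forward_velocity, impact_force, joint_torques):
    """Reward forward progress; penalize behaviors nobody asked for."""
    reward = 1.0 * forward_velocity                      # "go forward"
    reward -= 0.01 * impact_force                        # "don't smash into the ground"
    reward -= 0.001 * sum(t * t for t in joint_torques)  # "don't flail wildly"
    return reward

# Each penalty is a human saying "no, not like that" -- the design
# work is still being done by people, one extra term at a time.
print(round(shaped_reward(1.5, 20.0, [2.0, -1.0]), 3))  # 1.295
```

In practice a reward like this would feed a reinforcement-learning loop in simulation; the sketch only shows the shape of the reward function itself.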
[01:04:38] In a way, we humans are still doing the designing. This is just a tool. So the question of what do we want is both a now problem and a future problem that we're going to have to be able to solve if we want some algorithm to actually do some really good work for us. Okay. Let's go to a question from Amal N on Blue Sky.
[01:05:03] Does it matter if there are alien life forms, sentient or non-sentient, if there is no means of interstellar travel? Will there be interstellar travel, as a follow-up? So basically the question is if there are aliens but we can never get to them, does it matter, right? So the tree falls in the forest, right? Right. And the short answer is yes. I mean if we actually knew about it, right?
[01:05:32] And if we knew about – if we knew about an alien species, intelligent or not, and we just knew about it, then that changes – that gives us a lot of information about life and how life works. What do we know about this alien life form? What do we know? So presumably we will never be able to get there to talk to them. Let's assume they're intelligent. Let's assume we figured out that there's an intelligent species out there but they're so far away we'll never get to them.
[01:06:01] But if we know about them, then clearly some light speed signal has gotten to us. And we can learn something from that. I mean we learn a lot from life that we wouldn't otherwise – that we don't even necessarily need to communicate with to learn from. I mean we also like – we learn all kinds of things about species that aren't even intelligent. Like I try – I forgot to look this up.
[01:06:27] But like some of the DNA – the DNA editing technologies we use come from these weird microbes that we say, oh, this has this weird thing. Can we use this? Yes, we can. We get inspired by life all the time. And it would be so interesting. Think about all the things we learn, not just scientific but historical, sociological. If we just had – there's this alien species out there too far away for us ever to reach but was beaming to us their history.
[01:06:53] We get all their silly – whether it's curated or not, whether it was just their – whether it was their encyclopedia or it was their Dondi. You know? Wow. Their Dondi? Could you imagine? I'll tell you that would be – We thought we knew reality TV. Little did we know. But like we would learn so much just from getting even the slightest information about these aliens. Are they – is this a bacteria?
[01:07:23] What can we know about this bacteria? Even if we just look at the – if we're just doing like spectral analysis of the planet, we can understand what is it metabolizing? What kind of elements are in the atmosphere? That tells us a lot about what life can be. And if there's some kind of civilization, my goodness, to see how another civilization was playing out, even if we never got to see them, even if we could never send a message back and become pen pals, we could be viewing them as a time capsule. They could already be dead. They could be dead.
[01:07:51] And we were giving – we are seeing their time capsule sent from the past that we're only receiving now. And we learned so much, even if we never got to meet them. Christian, have you any thoughts about the East Coast New Jersey drones? Oh, that nonsense. Sorry. I think it's nonsense.
[01:08:15] I think – I mean there might be some drones, but like there's all – I'm a big time skeptic of these recent UFO phenomena. I haven't talked about it much with you, Rob, but I do – when you dig into these individual cases with enough detail, you start to see things that are like that's the moon. That's Orion. That's a bird. That's a bug.
[01:08:42] There's actually a really interesting – I'll tell you, if you – it is a thankless job to be the woman or man who has to figure out what these things are. It's thankless. Like people will put a cool video out of like, oh, look at this light in the sky. It's so interesting. It's interesting. But then someone has to figure out what it is. That person is not paid. There's no government agency that pays them. There was a government report, but that's – I'll talk about that later.
[01:09:08] But like it was – but like someone to really dig into these individual cases, it's a lot of effort. And normally it's just individual amateurs who are – have a lot of knowledge of typically photography who have to say, okay, this is this shape because the lens and the camera that you're using tends to capture motion blur in this way. And so they put out all this effort, often will make a really interesting YouTube video about this phenomenon.
[01:09:36] And then people who just really, really want it to be an extraterrestrial craft will descend upon them and call them shills or call them idiots. And I feel for them. I feel for them. They put so much effort into this, and it's beautiful. I think that there's a – one I'd recommend if you could find a few videos. His name is Mick West, M-I-C-K West.
[01:09:59] And there are some really famous UFO videos, things like pyramids, trails of pyramids in the sky, where you learn a lot about photography and how these are lens – these are lens artifacts. And it's fascinating. There are – some are really cool – and sometimes it will be like there will be a video from – released from the DoD who doesn't spend a lot of resources digging into what these things are. And – sorry, I'm going on at length.
[01:10:27] But if you dig into the details, there's sometimes – they're often really just mundane things. Something that looks like that's really far away and going fast can actually be something really small and up close moving slow. I mean a bug that goes across your camera lens can look without context like it's a spaceship going fast far away. And that's what a lot of these things are. Anyway, I'm sorry. I didn't prepare for that question. No, that was a Rob original, and I just threw it at you.
[01:10:54] But a lot of people were reporting this stuff. So I definitely understand like these videos. But do you think that people were just seeing other things like airplanes and stuff like that? I think so. I mean – and to be clear, I want to say if you saw something that you can't explain, that's very interesting. I would not take away from your experience of seeing that thing. I think that what happens is there becomes a bit of a mass hysteria once there's a couple of things.
[01:11:23] And then everyone goes out and starts looking, and it's easy to see lights blinking in the sky and be like, what is that? You had – I forget which Maryland politician it was. I'm from Maryland, so I should know who they are. But they're like, look at those drones in the sky. And they happen to be in the shape of Orion, the stars of Orion. So you have to – it can be easy to think like, oh my god, there's so much stuff happening. There must be something to it.
[01:11:51] But you have to break down all of these cases individually and tediously to see if there actually is a signal. Otherwise, you can get swept away in the mass hysteria of it. And that's super important if you want to maintain a level head over something that is so potentially mind-blowing if there were aliens. But there's so many favorites and retweets to get. But yeah, there's that. And I truly believe there are a lot of people who – I get it. I loved aliens when I was a kid.
[01:12:20] I love the idea of there being extraterrestrials visiting the earth. I would buy books about it. I would read them. That would be – we would learn so much if we had even some information about things if they're real. But we have to make sure – do the really hard diligence of digging into every case and seeing if they're real. And I'll tell you, it is a thankless job for the people who do that. Yep. You're making a lot of sense.
[01:12:47] But I just can't help but picture that aliens come and go to the White House and meet Trump. Like it makes so much sense that you can see it already. It's not even hard to imagine in your mind the aliens at the White House meeting with Trump. There's fan fiction in that for sure. And I think that that would be – I guess I can – there was such a big swell of these things.
[01:13:16] Like you can imagine. Like you can sort of extrapolate in your mind these things happening, right? And so I get it. I get it. When I went to DragonCon, that conference that I talked to you about, it's diversified. It's lots of – there's comic stuff. There's Blood on the Clocktower stuff. There's science fiction stuff. And there was a skeptics panel. There's a specific – a skeptics track where there are all kinds of topics. And last year UFOs were all over the news.
[01:13:43] Like so I went to the skeptics panel on UFOs and it was packed. I could barely stand in the room. It was so popular. People wanted to know more about it. So I get it. I get the hype. I just don't buy the hype. Christian, you mentioned that they have Blood on the Clocktower panels at DragonCon. Miranda says, Christian, I want to compliment you on your production work on Blood on the Clocktower. It's been so fun to hear you talk about it in your occasional Twitch streams. Any cool updates on the editing process?
[01:14:13] What's your dream for how to continue to level these up? Now, we have brought some Blood on the Clocktower games with survivors into the universe. And Christian and Stephen have been working very hard behind the scenes to make that happen. So a nice compliment from Miranda. Well, thank you. It is. Thank you, Miranda. And I only feel slightly guilty that I chose this one to read on the air.
[01:14:39] But I think that that's – it's been a wonderful ride. And thank you, Rob, for supporting this effort. It's been – it was an idea of – so for those who don't know, Blood on the Clocktower, we talked about it before, is a social deduction game. It's a very popular board game where you're trying to find out who the baddies and the goodies are. So a good team and an evil team are trying to work against each other. It's really a lot of fun. I can talk about it for a long time and have. And Stephen had the idea.
[01:15:07] Stephen Fishbach had the idea of bringing this to YouTube, playing with survivors. Other people have done this on YouTube. There are others – if you're interested in it, a channel called No Rolls Barred plays a lot of these games. And it's like, hey, let's play it with people who play like deductive games or social strategy games in real life. So we did it with survivors. The challenge is it's a game with 10 players thereabouts. At least we play it with 10 players. You can play it with fewer.
[01:15:32] And you have to film everyone's camera angle and you have to film them interacting with other people in different breakout rooms. So it created this real massive editing challenge to sync everything up. So it actually has been a fun summer – it was a summer project, believe it or not, at the beginning, Rob, where I did a lot of coding to basically come up with an algorithm that would automatically cut up all of everyone's individual camera feeds and put them onto an editing timeline for a real editor to actually edit a story.
[01:16:02] So I'm like the – I've never been more excited to be a pre-editor in my life, Rob. Like the guy who does the syncing for reality shows. In a way, like – honestly, Rob, putting this together did feel a little bit like putting together like a real reality show. Like what storylines have to make the final cut? It's interesting. Because there's so many conversations that happen in the mini rooms that you have to put together.
[01:16:26] And then – so not only do you have to know which camera footage to use but also who was in what room when. And so it is a daunting task. Yeah. And so basically I turn to science wherever I can, and so I turned to it to solve this problem. And it really just tingled the synapses of the roboticist in me, which is, on some level I'm like, here's a task I have to do. How can I get a robot to do this instead?
[01:16:55] And so like – so I'm trying to add more and more features so that way when we get it to the real editor, the real editor has less tedium to do. And they can do more art. They can do the art of editing the timing of the cuts together so all the tedium is gone and all the love of the art can be there. So that's been fun. And so I'm trying to add more features like to automatically lay out the room in certain ways. That's fun details.
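A toy version of that syncing idea: given a log of who was in which breakout room and when, group every camera clip by room and sort it into timeline order so an editor can follow each conversation. The players, rooms, and times below are invented for illustration; the real pipeline presumably works on actual camera files and timecodes, which this sketch ignores.

```python
# Toy sketch of the Blood on the Clocktower "pre-editor": group each
# player's camera time by breakout room and sort into timeline order.
# Players, rooms, and times are invented for illustration.

from collections import defaultdict

def clips_by_room(assignments):
    """assignments: (player, room, start_sec, end_sec) tuples."""
    rooms = defaultdict(list)
    for player, room, start, end in assignments:
        rooms[room].append({"player": player, "start": start, "end": end})
    for clips in rooms.values():
        clips.sort(key=lambda c: c["start"])  # timeline order within a room
    return dict(rooms)

timeline = clips_by_room([
    ("Alice", "Library", 10, 90),
    ("Bob",   "Library", 0, 90),
    ("Carol", "Study",   0, 60),
])
# Who needs syncing in the Library, in the order they arrived:
print([c["player"] for c in timeline["Library"]])  # ['Bob', 'Alice']
```

A real editing timeline would then lay these grouped clips onto tracks for the human editor, which is where the art of the cut takes over.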
[01:17:20] But it's just more – I wanted to sort of give you the fun and the passion of what it's like to do a project like this and what that process is like. What comes out on the other side because some pretty fun videos come out. Okay. So then another Blood on the Clocktower question from Zach. When you play Blood on the Clocktower, what are the social cues you look for? This is a good question. It's a great question. It's a great question. And I think this might be Zach Wurtenberger. I'm not sure because we play a lot. I wonder. I wonder.
[01:17:50] It might not be. I wonder if it's him. But we played together. And the short answer is I don't use a lot of explicit social cues and reads. Like I think that people like to look if people are – I do some things, but like I'm not the kind of person like is the person looking away? Are they looking down? Are they drinking their coffee? Those kinds of reads, if there's an ability to read those, I'm not the guy to do it.
[01:18:16] I'm not doing the thing where I – these are the tells that people do at the poker table. Less that. I think what's more interesting to me is like does the logic of what they're doing make sense if they're good or if they're evil? You know why – and then within that context, you can ask them questions that probe whether or not – which one of those things is true. And when those things happen, you're like, okay, then you can start picking up on some social cues. Like is this person talking a lot or a little?
[01:18:46] That's kind of a classic one. A lot of times the evil players will want to not talk very much so they blend into the background. We see this in The Traitors. The Traitors, I – I'm assuming you've been keeping up with The Traitors. Very much so. I've been podcasting. Sure. Oh, and you had RHAP Traitors as well. That's always quite an endeavor that you put together. It's really cool. I mean like you see this in – I mean if you're watching The Traitors, some people rely really heavily, seemingly, on these social reads.
[01:19:14] Like this person was swearing a lot and they – people do that when they're lying. And I feel like those things can really easily lead you astray. So I try not to do them. Okay. So is there – what do you look for? I mean like seriously like I'll – like you have to go to the point where there are decisions that people are making, decisions about who they talk to, decisions who they vote for. And why would they do that? Or when they talk to you, they can choose to tell you some pieces of information and not others.
[01:19:43] And you can just kind of just logically check that. It's like, well, if they're good, does that make sense that they would do that? And that gives you a context for which you can actually believe them or not. I don't know. So I tend not to be like – to look for like these stereotypical traits. I didn't do that in Survivor either. I did notice like the things that were more informative on Survivor when I found someone was going to blindside me, which I didn't act on. I should have acted on more.
[01:20:09] But like what it was like when people were telling me things that didn't make sense. It doesn't make sense. And if you're telling me something that doesn't make sense, it means you're probably not on my side. And it also means you're lying to me, so I'm probably the target. Like that should have been – like that's a thing that you should lay out step by step by step. If you're going to play a game like Survivor, right? Like why would someone tell me this if I was on their side? It's because I'm not. And if I'm not, you're probably on the menu. Okay.
[01:20:39] Wow. Now, Anna asks, Christian, I loved watching you play at the beginning of the season. Did you have any predictions about who might end up winning the whole thing? So yes and no. Yes and no. I should say no and yes. No, I didn't have predictions as to who would win. However, we are blessed, Rob – by the time I played, in the 30s of seasons,
[01:21:06] we had a lot of data to go off of on previous seasons of the show. So we've seen lots of instantiations. And you had a podcast for a long time, Rob, with Angie Caunce, as I'm sure you remember. It was a very interesting podcast. I think that she discontinued them because casting had changed over, was my understanding. But like basically Angie had a podcast with Rob where she would track – she went through all the seasons
[01:21:34] and she works like in marketing and was looking at players as sort of like marketing stereotypes or like TV trope-like stereotypes. What kind of character type you were. In fact, it was called the Caunce character type system is what she had developed. And there are lots of ways that people try to break down Survivor in some kind of systematic way. There are people who like to do edgic charts. There are people who like to do all kinds of things. That kind of stuff interests me less. But the Caunce character types, I thought there was something there.
[01:22:03] Because there is something in how casting is looking at us to put on the show. Like they want to look at you and say, okay, what constellation of personality traits and look do you have that's going to read on TV? And that can read into an archetype, right? You know, so I think people would say Rob and I are kind of in some ways a very similar archetype. She would call us, both of us, the know-it-all archetype and things like that.
[01:22:32] Because we're people who knew things about the game or about intellectual endeavors and would be fast talking and or witty. Like those kinds of traits, right? Which would be very different than say like the surfer dude. And enough of these things have passed by that you can sort of start to see trends into which of these types do well versus others. And she actually brought in a real like computer scientist to do a machine learning analysis on a lot of this data and pulled out some real interesting signals.
[01:23:01] And his name was Sean Falconer. You can still find Sean Falconer's blog posts, I think, on seanfalconer.com. I think it's where you can find maybe blogabout.com. You can Google it. You can find it. Really interesting stuff. He basically took it to another level. But I took it from a very basic level of which of these types tend to do well. And so when I'm leaving for the game and I'm looking around at all the people at Ponderosa, which is where you go before you start the game, I'm like, okay, who's the surfer dude? Is there a surfer dude here today?
[01:23:30] And Alec. Alec is definitely the surfer dude. Now, keep in mind, we are not talking. We can only kind of look at each other. And also, we're not in our kind of like Survivor outerwear yet. So we're kind of – we're just wearing the normal people clothes and not like our costume, if you will. Like I'm not wearing the robot shirt in, like, Ponderosa. Like I remember looking at Elizabeth. Elizabeth's not wearing a cowboy hat, which was one of the giveaways. So I was like, oh, I didn't know how to read her in Ponderosa. Then I see her in the cowboy hat. I'm like, oh, that makes sense.
[01:24:00] I remember thinking about this one guy who's wearing like a checked shirt – sort of a stuffy kind of checked shirt, not your average checked shirt. I was like, oh, that guy, that's a farmer. That guy is like a farmer. It was Mike White. Very much not a farmer. So I got a lot of things wrong. But I noted three people who also happened to do well in these sort of character archetypes. One was Alec. The surfer dude tends to do pretty well.
[01:24:28] I mean that's like your Ozzy – at one point he would do really well sometimes. That's your Tyson. I think Tyson even counted. I forget. Yeah. And also Devon from season 35 was the surfer dude at the time. So he was there, and I'm like, okay, that's a guy I might want to work with. The good old boy is another character type – Nick. Nick. And he won the season. Spoilers. They tend to do pretty well. And the other was the character I called the Erin Brockovich type character.
[01:24:59] And that was Allison. And so I was like I always wanted to work – I wanted to work with people who I think would probably work with me but also were destined to go far. And therefore, I was likely to go far with them. And to be clear, I think the mechanism for this is not magic. It's just when casting is trying to put together who goes on the show and to give us all kinds of ways to try to understand how we think and how we respond, they know our personalities pretty well.
[01:25:26] And so that's one of the few things you can do when you know nothing about these people to go off of to give you a slight bit of an edge. So did that help? I don't know. But that's what I did. Okay. No, that's a pretty good explanation. I never heard of anybody – I think you might have been the only person who ever brought it into the preseason and said, I want to align with the people that are going to go far. I guess so. That would be my goal.
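The simple version of that archetype analysis – tallying how far each character type tends to get across past seasons – is easy to sketch. The archetypes and placements below are made up purely for illustration; they are not anyone's real categories or results.

```python
# Toy archetype tally: average finishing placement per character type
# across past seasons (1 = winner). All data here is invented.

from collections import defaultdict
from statistics import mean

def avg_placement(results):
    """results: (archetype, placement) pairs from past seasons."""
    by_type = defaultdict(list)
    for archetype, placement in results:
        by_type[archetype].append(placement)
    return {a: mean(p) for a, p in by_type.items()}

sample = [("surfer dude", 2), ("surfer dude", 5),
          ("know-it-all", 7), ("good old boy", 1)]
ranking = avg_placement(sample)
print(min(ranking, key=ranking.get))  # type with best average placement
```

A machine-learning treatment like the one described would go well beyond averages, but the underlying question is the same: which types historically go far.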
[01:25:49] I mean honestly, the issue is if you're going to come out as a new player, let's say you're going to be on Survivor 49 or whatever, can you use this today? Well, Angie does not do this anymore in part. I think she put out a post saying like casting has changed and therefore it shakes up the entire system so it's hard. So the old categories don't really matter as much and that makes sense. They kind of changed over how casting worked in the last few years.
[01:26:14] And so – but like there is something to be said that these are not just random people plucked off the street. We are television characters. You know, if you looked at me and like – put me on the island and you actually met me and I'm like, oh, I'm a grave digger. Wow. Like that would be cool. But like it would be a weird TV match. You know, like would they cast me if I was a grave digger? I think it would be a little less likely. It would be a little strange.
[01:26:43] So there is – reality TV is almost one of the few places where it makes sense to judge a book by its cover. Anonymous says, Christian, what drives you to be a good professor? Well, thank you, Anonymous, for the compliment and – or at least the aspirational compliment. I want to be a good professor. I mean whenever it comes down to like talking about science or any complicated topic or any of this stuff, right?
[01:27:10] The reason I do it and I want to do it well is because I think it's cool, the stuff I'm talking about. I really like it and I really want my students to like it too. Like I want them to experience that revelation and empowerment of understanding this topic. And if you approach it from that, everything else just clicks into place. Like if you have a student – like they're asking a question. It can be easy to get annoyed at certain kinds of questions. Like I've already said this. You weren't paying attention.
[01:27:39] But no, you take it from the point of view. It's like, okay, no, no. I want to make sure that this person has that same kind of revelatory experience I have when I learned this. It instantly puts you in a different state of mind. It's like, okay, all right. At what point do you understand this part? Okay, if you understand this part, now do this. Now do that. Now does it make sense? It puts you into a helpful mindset. And also the fact that you're conveying something that's really cool to you, that's just an infectious idea.
[01:28:07] Like when you're excited, when you're enthralled by a topic, you know, it comes across. At least most people, it comes across. And when you love it, when you're having fun, other people often have fun watching you. It almost doesn't matter what it is you're doing. I mean, you talk about this on – even for Survivor seasons, Rob. Like you talk about like Johnny Fairplay. He's doing villainous things in like his seasons. But he's having so much fun doing it. He's enjoying himself.
[01:28:35] So Johnny Fairplay has a lot of fans who enjoy watching him be his villainous self. Now, I would hopefully not be a villainous professor. But the same thing applies that just the passion for wanting to have people understand why it's so cool, that often drives a lot of us, not just me. All the professors I know who love teaching drives them to do it too. All right.
[01:29:00] Christian, let's go back to one of your favorite topics. Mike Skall says, what is your favorite song on the original The Mole soundtrack? I always enjoyed Execution. Yes, on the original The Mole soundtrack. I love the original The Mole soundtrack. So you can find a podcast a couple years ago. It's wonderful. Wonderful.
[01:29:26] So when you covered the Netflix season one of The Mole, I popped on for a podcast with you guys. And I revealed the slightly embarrassing fact that I own the original Mole soundtrack on compact disc. And it was out of print. So it cost me $50. And so I own this. So I love the song where the execution is happening. It's great. It's great.
[01:29:52] I thought this was also a great jumping off point for like reality show soundtracks in general. Like which ones are awesome? And the Mole, original Mole one, I think the children say it slaps. It slaps. Do they still say that? I don't know. Maybe at one point they said it slaps. I don't know. Fire. I'm about six years behind in my lingo. I'm slowly becoming the professor that makes really, really dated references. And they politely chuckle in the audience when I do it.
[01:30:21] So it's great. They rebooted the Mole in season five in 2008. And they kind of did a sound alike kind of soundtrack, which is also pretty good. It's also pretty good. And now the Mole has its own kind of more modern and very minimalist kind of soundtrack. So I love the Mole one. Survivor always has good soundtracks. Rob, do any reality show soundtracks stand out to you?
[01:30:50] Well, I mean, when we talk about soundtracks, I feel like Survivor's music is so iconic. And I remember downloading it. I don't know if I ever had the CD of it. But I remember, like, on LimeWire, I had downloaded all of the songs from the original CD that was the official soundtrack to the first season of Survivor. And my favorite track.
[01:31:18] Like I feel like all that music is iconic. My favorite of all of those is a track called I Can See It. Track five. Track five. Okay. Okay. I feel like I got to look this one up. Can you give me the essence of it? You don't have to necessarily sing it. Like you saw it. You heard it a lot in seasons one, two, three, but it was a lot of like. Like it's beautiful.
[01:31:48] I would start to get a little misty-eyed if I listened to it. They should bring that back for season 50. They should bring back a couple of those little tracks. I don't know if they can. I don't know what the rights issues are. But yeah, they would have it with, like, the scope of the epic background. And a lot of times there was not a lot going on in some of these old classic Survivor episodes. They had a lot of scenery shots.
[01:32:19] But, you know. Yeah. I mean, the more modern Survivor is scored a little bit more like an action show. Like especially at tribal council. You know, I remember like by the time, by the later season when they used the original soundtrack, the tribal council, like it has this sort of aura, this mysteriousness to it. Which started to lose a bit of its meaning when they're voting out like Melinda in season one. Like no offense to Melinda. But by that point, we've heard it a lot of times. Right.
[01:32:46] So they kind of turn it into a little bit of an action score, which I think is totally appropriate given the fact that like the show is more of an action show at tribal council. I've always maintained that like season 16 is like the – it's like – it's almost like the Wrath of Khan for Survivor. And that like the Wrath of Khan – so the Wrath of Khan in Star Trek was the Star Trek movie that was super popular. It really popped off as children once said. I don't know what they say anymore. But like people love that one.
[01:33:15] And it was a big action show. Like there's so many blind sides in – sorry. I'm mixing my metaphors. Sorry. Star Trek II, the Wrath of Khan, was much more of an action movie than Star Trek I, which is the motion picture. So motion picture, Star Trek I is much more of a science fiction movie. Very slow paced. Very much more 2001 Space Odyssey. Star Trek II, the Wrath of Khan, is much more of an action movie. A very good action movie. Well – great script for an action movie.
[01:33:44] But it was super popular. So I feel like so many of the Star Trek movies that have happened, especially in the last handful of years, have been trying to basically recapture the Wrath of Khan as a result. Survivor is one show through maybe 14, 15-ish seasons. And season 16 comes around. The merge happens. And it's such a transcendent experience because there are so many blind sides all in a row. They love it. You're losing Ozzie. Sorry, some spoilers here for season 16 of Survivor.
[01:34:15] You're losing Ozzy. Eliza's making crazy faces. You're losing Jason Siska. Erik's getting voted out after giving up his immunity necklace, which they never brought up again, by the way. You know, Erik makes a sacrifice, not unlike Spock in Star Trek II. He does. He does. And I feel for that, man. I have such affection for Erik. It gets brought up all the time.
[01:34:43] And imagine having to relive that for years. So, Erik, right here, man. But the show loves the blindsides. So the show is almost now structured, after that point, to find more ways to create blindsides. All of the new era twists are ways that people could somehow be blindsided: by the Shot in the Dark, by there not being enough votes. They want the blindsides. The blindsides are almost the name of the show now.
[01:35:12] That was never the ethos of the show, for the most part, in the first couple of seasons. They happened. So that's my parallel between the Wrath of Khan and Survivor: Micronesia. So, yeah. But my point is, the score is now scored like an action soundtrack. And if there is, or was, a time to bring back the original style, I can see it. I know what you're talking about. If there was a time, I would love to hear that, because you could slow things down. You can see it? Yeah.
[01:35:41] I can see it happening, is what I would say. I mean, especially with 90-minute episodes, you could let some of these scenes breathe at important moments. I think that could work. Okay. Dewey wants to know: do you think humanity will be able to colonize the moon and Mars? If so, how do you think it's going to play out with the geopolitical tensions being as they are currently? Well, I would not worry about the geopolitical tensions as they are currently, because it's going to take a very, very long time.
[01:36:10] I hate to be Johnny Raincloud on a lot of these sci-fi topics. I've got to come up with one of these times where it's like, oh, actually, Rob, this is happening tomorrow. Listen, Christian, I know I keep telling you to watch For All Mankind. I know. I've already seen it happen, and boy, there are tensions. Trust me that there's geopolitical tensions when it happens. Oh, I'm sure there will be when it happens in real life. But whatever those tensions are, they probably won't look like they do today.
[01:36:35] I mean, the moon is one thing, because the moon is closer to Earth. And the key here is distance and supplying things. So the moon I could see, potentially, because it's closer. And why does that matter? They're going to need supplies of food, or they're going to have to find a way to make food. Or if things go wrong, people have to come and resupply them or send in new people or rotate them out. And you can get to the moon in a couple of days.
[01:37:06] I could potentially see that, if it's worth the actual cost of going there. It could be a staging zone for going off to other planets, because the gravity is less. Therefore, the gravity well is smaller, and therefore it's cheaper to launch from the moon than it is from Earth. But it's still so much infrastructure you have to put on the moon. Mars is a completely different story. I mean, yes, there's an atmosphere there, but you can't breathe it. It's cold. It's so far away.
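To put rough numbers on the gravity-well point, here's a back-of-the-envelope sketch (editor's illustration, not from the conversation; the 350-second specific impulse is an assumed, roughly chemical-rocket value, and the masses and radii are standard reference figures):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# name: (mass in kg, mean radius in m), standard reference values
BODIES = {
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.737e6),
}

def escape_velocity(mass_kg, radius_m):
    """Speed needed to climb fully out of a body's gravity well: v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def rocket_mass_ratio(delta_v, isp_s=350.0, g0=9.81):
    """Tsiolkovsky rocket equation, rearranged: m0/mf = exp(dv / (Isp * g0)).
    Roughly, how many kilograms of fueled rocket you need per kilogram delivered."""
    return math.exp(delta_v / (isp_s * g0))

for name, (m, r) in BODIES.items():
    v = escape_velocity(m, r)
    print(f"{name}: escape velocity {v / 1000:.2f} km/s, "
          f"rocket mass ratio {rocket_mass_ratio(v):.1f}")
```

Earth's well demands about 11.2 km/s and a mass ratio around 26 (almost all propellant), while the moon's demands only about 2.4 km/s and a ratio near 2, which is why a lunar staging base looks attractive on paper before you count the infrastructure cost.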
[01:37:36] It takes somewhere around seven to nine months to get there. And here's the analogy that I like to draw. There's a place on Earth that's a little bit like this, called Antarctica. And there are people that live in Antarctica. We have colonized Antarctica. We have bases there. There are people who live there. They rely upon huge shipments of outside materials just to live in Antarctica. It is not a nice place to live. It is such a struggle.
[01:38:05] It's such a major engineering challenge. And they can breathe the air there. On top of that, on the moon you have to keep a pressurized station. You have to get all that food to the moon, which is days away. You've got to put it into space, which is expensive. The logistics of getting to the moon and keeping people alive there are incredible. And Mars is orders of magnitude harder.
[01:38:31] You're a year away from your next supply run, right? And if one of those rockets gets delayed, you'd have to have another ready. It would take such a massive infrastructure to keep these people alive. And that's why it's so hard. We'd have to come up with a system to make this work, and that's incredibly expensive and so hard to do. Okay. So you don't think it's coming anytime soon? Definitely not Mars.
[01:38:57] Mars, no. But I know they are making all kinds of moon missions. They're starting to work on lots of moon missions. I think that's cool. I think that's interesting. On the actual moon, I could see it being a bit like a space station. You know, we have a space station. We rotate people off of there every now and again, although sometimes you run into problems there, right? Astronauts were stuck on the space station for a lot longer than they thought. Toilets don't work. Yeah. Could you imagine?
[01:39:24] So I could see something space station-esque happening on the moon. But Mars, man, that's just hell. Mars is such a terrible place to go to. And there have been all kinds of would-be Mars missions. People were trying to crowdfund one. It was called Mars One. I think I might've talked about it a long time ago with you. I'm not sure.
[01:40:15] But basically, whether by intent or not, it effectively ended up being kind of a scam. Let's call it a boondoggle. Boondoggle, you can't sue me over. It was a boondoggle. And it just didn't go anywhere. They tried to make a reality show out of selecting people to go to this Mars colony. That didn't go anywhere either. But yeah, it's not anytime soon. Yeah. Okay. Steven has a question. Not Fishbach.
[01:40:40] Does Dr. Hubicki have any personal thoughts on the use and implementation of generative AI in the workforce? Is he pro or against utilizing AI to automate some of the more mundane tasks in work? Aside: I am totally against using AI for art and writing, more so for data organization, operational efficiency, et cetera. Yeah. Yeah.
[01:41:10] So this is a question I have to wrestle with a lot, because in my laboratory there are some scientists who use a lot of generative AI techniques for various things. I know some people try to use it to help them write their papers. I kind of caution against that. But where do I think is the right place for it, if anywhere? I mean, as a tool, like I said, it's incredible what it can do, but what can it reliably do? And it's not very reliable technology, at least not for things you have to count on.
[01:41:39] I would use it for things that are basically summarizing work that you've already done, like maybe reformatting it, putting it in a different format. And something where it's not high stakes, right? One thing I'll see people do is use it to help transcribe meetings. And if the meetings are not that earthshakingly important, totally fine. Like, if it's better than my memory of what did we talk about at that meeting? I'm not very good at taking notes.
[01:42:07] So it's better than my memory. I've been in meetings where people use that, and I think that's totally fine. Low stakes, summarizing things. It's not generating much that's new; it's summarizing things that are already put into it. I think that is probably the sweet spot for where it needs to be. But how important do these meetings get? I know that some people use these tools for medical appointments. I know people who are doctors and physician's assistants who have
[01:42:33] been given this technology as part of summarizing medical meetings, and they are warned: make sure you check it, because it can hallucinate. And that scares me a little bit when it's a healthcare provider, right? I mean, I trust my doctor to check what they're reading. I hope I can. Because if it's doing medical notes, that's going to be forwarded on to whatever my next medical appointment is, and that's part of my record.
[01:43:05] So the more important it is, the more you have to double-check it. But as sort of an organizational tool, to help summarize things that are already done, I think that's the best use case that I see for general productivity, for me. I think that there's more use cases than that. I find that it's really helpful in terms of, like, okay, here's all this information.
[01:43:26] Like, can you synthesize what I'm telling you into a different format? What's a plan that you would recommend? I love to bounce ideas off of it. I think that's another one. I would say, so the first thing you described, I kind of meant that as well: reformat this into a different, like, bullet-pointed list. Again, if it's super important, you might want to check, because sometimes it can drop things.
[01:43:50] But some people like to use it as a jumping-off point, like for inspiration. And that's something that people are studying. I don't know what the literature is on that, as to how helpful it is. But anecdotally, here's something I'll do from time to time: I have a sentence I'm writing, and this sentence sucks. Why does this sentence suck? And I can put it into, like, a ChatGPT: hey, tell me what's wrong with this sentence, condense it. And it gives me something. I look at it, and it's like, oh, okay, I see what it's doing. I still have to be careful.
[01:44:20] Because sometimes it will drop a point that I think was actually important. But it can help you reframe your thinking. So I'm fine with that as well, if you consider that a productivity tool, which I think is a totally reasonable argument to make. Okay. Dr. Amanda, great friend of the podcast: I would love to hear Christian's review of the Christopher Nolan opus Interstellar. Oh, Interstellar. I have such a complicated relationship with that movie. It came out 10 years ago now.
[01:44:50] Wow. Crazy. Crazy. I've never seen it. You know what? You've got to sit down for that one. That's not a do-other-tasks movie. Not a multitasker. Yeah. Oh my God. Especially not with the sound mixing. So I saw that, I will never forget, when I was on a research visit in the fall of 2014, and my only pop culture connection was that I was watching San Juan del Sur. I was basically living out of a motel for a month in College Station, Texas.
[01:45:20] I was listening to San Juan del Sur and you guys' coverage of San Juan del Sur. And when Interstellar came out, I liked seeing Christopher Nolan movies, so I'm like, I'm going to see that. And it's a space science fiction movie. And there's a lot of things I like about it. I've told you before on a previous Ask Dr. Hubicki: I love the robot in it. There's a robot called TARS. I think it's a super cool robot. I remember watching it, and I had two really loud people next to me talking during the movie.
[01:45:50] And for those who have not noticed, Christopher Nolan loves to mix his dialogue a little light underneath all of the ambient noise, so it's hard to hear what anyone's saying. So I missed a lot. Watch it with the captions on. Oh, definitely. For me, every Christopher Nolan movie is a captions-on movie. Every one. Although with Oppenheimer, I think he did a much better job, in my opinion. Tenet, I could not watch without subtitles. A lot of cool ideas.
[01:46:20] At one point in the movie, and I'll try to keep the spoilers light, Rob, because you haven't seen it, they introduce the idea of time dilation to the public. You've probably heard the idea: if you go fast, close to the speed of light, time slows down for you. That's time dilation. They smartly don't call it time dilation. They have your instincts for naming things, Rob. Don't call it a depth-first search. Don't want the depth-first search. They call it time slippage, like time is slipping away. It was a much better name than time dilation.
[01:46:50] Because people think of dilating eyes, and they don't think of what it actually is, which is time slowing down, time slipping away from you. And they have a plot point that involves it, and it's really well done. I think it's actually one of the things that stands out in the movie as being memorable. People talked about it: when you're on this planet for one hour, it's actually years outside the planet, or something like that.
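For the curious, the physics behind that plot point is gravitational time dilation: the planet orbits deep in a black hole's gravity well. For a non-rotating mass it can be written in one line (an editor's sketch in standard notation; the film's black hole is rotating, so this is the simplified version):

```latex
% Gravitational time dilation outside a non-rotating mass M:
\Delta t_{\text{near}} \;=\; \Delta t_{\text{far}} \sqrt{1 - \frac{r_s}{r}},
\qquad r_s = \frac{2GM}{c^2}
% As the radius r approaches the Schwarzschild radius r_s,
% the square-root factor goes to zero: one hour "near" can be years "far".
```

For the film's famous ratio of roughly one hour on the planet to seven years outside, the square-root factor would have to be about 1/61000, meaning the planet sits extraordinarily close to $r_s$, which is why the scenario needs such an extreme black hole.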
[01:47:15] And I believe that Josh made a reference in the Evolution of Strategy to the idea of Dan Barry being off in space for hundreds of years. I can't believe I remember that. So that's what that was a reference to. Also 10 years old. Yeah. Oh my goodness. So it was a very timely reference at the time. So that stuff is great. And I think the challenge was they also have a black hole and a wormhole.
[01:47:45] There's a lot of scientific concepts. There's a lot of stuff in this movie, and you really have to be dialed in, paying attention, and you can't miss anything. And then they add some other pseudo-supernatural elements to it. I think at some point it becomes too many things; too many themes creep in for it to be the beautiful movie that just sings in my heart forever. But I admire the ambition of that movie. And I'll never be mad at Christopher Nolan for being ambitious. I love the ambition of those kinds of movies.
[01:48:15] It's just, I think I would take a few things out. I would be the editor. I would say, let's take out this theme and see if it all still gels. Just a little too many things. So yeah. Okay. Great review. Oh, thank you. I do think you should revisit it. I'd see what I think again. But it's definitely one of those movies you've got to be in a mood for, really. What's the runtime? Oh, a Christopher Nolan runtime, it's well over two hours. I wonder if it's three.
[01:48:45] I'm probably never going to get my wife to watch this. Yes. I mean, it's got Hans Zimmer. Let's go back to score here. It's got a Hans Zimmer score. Gotta love a Hans Zimmer score. I would have one piece of feedback for a lot of these reality shows. Hans Zimmer is kind of most famous for the whole Inception horn sound, like when it makes that big blaring noise. He made that famous, and everyone rips it off. And it's not interesting when it's ripped off. Don't do it.
[01:49:15] But it's actually crept into a lot of reality shows. Even Survivor will do the blast, like just drop the bass, the low brass hits. You know what I'm talking about? Remember, Inception had this really loud one. Sure. Yeah. So everyone knows it. It's now in trailers. Might be on Paramount+. It's probably on Paramount+. It's all over the place. It works when he does it, but not when everyone else does it. Okay.
[01:49:40] And Christian, finally, Scott asks: in what place do you feel most connected to the world? Yeah. So that was right here on Ask Dr. Hubicki. Absolutely. It's right here with you, Rob. As always, my heart will go on. Honestly, I was trying to think about what that question meant. What that means. I understand it as a phrase people say.
[01:50:05] For me, and I'm curious what you think when you hear that question, but for me, it's when I feel like I can come to some greater understanding of the world around me. Like I can actually absorb the sensory inputs around me and then synthesize some new understanding that I didn't have before. When you hear that question, Rob, what do you think he means by being connected to the world?
[01:50:30] I think that it's where you feel at peace, you know, feel like you're connected to the universe. Yeah, I guess so. Some people call that kind of a spiritual feeling. Not the language that I tend to use, but I could see that. At peace, I think, is a part of it, because that might be a place you need to be mentally to actually come to some kind of revelatory experience, or at least a novel understanding
[01:51:00] of what the world's about, or your place in it. And it is cliche to say, but I did feel this when I was on the island. It's a bit cliche. I don't like cliche, but I feel like it was true. You're just out in the middle of the night, and I have no other responsibilities other than to be here right now. I'm plugged into a game, but because you're separated from everything else you're normally worried about, it does give you an opportunity to get new perspective.
[01:51:30] It requires an ability to absorb and take on a new perspective and say, yeah, what is actually important right now? Let me actually focus on the stuff around me, as opposed to the task directly in front of me. I think if I were to come up with the opposite of being connected to the world, it's when I'm diving in on a task. I can feel connected to that task. I'll tell you, if I'm zoned in, like when I'm coding for Blood on the Clocktower, Rob, I'm like, I want this thing done on December 27th.
[01:52:00] I wanted this thing done. I was coding up a storm, Red Bulls chugging. I was dialed in. That's like the opposite of this experience, right? Because I am ignoring the world around me. Right. So I think that's the way I kind of process that question: when I'm not zeroed in on a task, I'm instead taking in as many sensory inputs as I can, or at least perceive that I am, and trying to come up with some new understanding of what that's telling me. That's a little vague, a little bit more woo-woo than I typically get, Rob.
[01:52:30] That's my interpretation of that question. Yeah. Okay. Well, I think it speaks to so many of the passions that you bring to this podcast, where you're able to tell us about all of these different things, whether you're talking about a sandwich, or explaining a technical term, or a movie, or an area of robotics. The way that you speak with passion about all of
[01:53:00] these things is what makes it so fun to listen to you. Thank you, Rob. It is a joy to talk. You're such a great host and interviewer of these things, too. You ask the right questions, and you push in the right directions. Thank you so much for that opportunity. I appreciate it. This podcast makes it very easy to ask the right questions. I'll just riff. Give me a question, I'll go. Yeah. Okay. But this was wonderful. Of course, check out the Blood on the Clocktower games.
[01:53:30] If you haven't seen them, we've done two so far. Another one is in the can and will be coming out in the next couple of weeks. So be on the lookout for that. Christian, what else is going on for you? Oh, I mean, new semester, who dis? Life always begins anew with a new semester. Yeah. For me, I have a goal for myself, Rob. I have been telling myself that this class that I teach every spring, this applied
[01:54:00] optimal control class, the one I'm pointing at here, I love it so much. It's my own class. I made it up, it's my own material, and I want to put it out to the world on YouTube. But I've been hoarding it. I've been running it for years now, and I have all the lectures recorded, and, to borrow from Mark Rober, I just can't push publish on the YouTube videos. It's not ready yet. It's not perfect. But this year I'm going to force myself to publish my lectures, basically for college
[01:54:30] students to take this course online. I've offered it online privately to universities and institutions beyond my own who want to take it, but I want to put it out there for the world. So that's my current task, aside from Blood on the Clocktower. That's the other YouTube video I'm working on. Wow. Okay. So that's a very ambitious goal. Well, thank you. Yeah, I just have to get it out there. You put that painting up 19 feet in the air.
[01:55:00] You could do this. That's right. If I could do that, I can do this. See, it's like what Jeff says about Survivor: if you can do this, you can do anything. If I could hang that painting and live to tell the tale, I can release a YouTube video. Okay. Well, Christian, where can people follow what you're doing? You can find me on most of the socials. You can find me on Bluesky at Chubicki, C-H-U-B-I-C-K-I, and also on Instagram, Threads, and similar text-based apps.
[01:55:30] You can find me there, or you can go to christianhubicki.com. If you like hearing me talk, there are ways you can have me come and talk to you, if you'd like. You can reach out to me through there. Okay. All right. For everybody else, thank you so much for joining us. We just posted our Genevieve Survivor 47 interview on Wednesday, so be on the lookout for that. Plus, we'll be back with more Traitors as well. So make sure you are keeping tabs on everything we have going on here at RHAP.
[01:56:00] We'd love to see your comments and your feedback here on the YouTube channel as well. Thank you so much for joining us. Take care. Have a good one. Bye.

