Elon Musk on Superintelligence, Mars, and What Humanity Must Do Next
Shared by Linda Smith
(0:00) We're at the very, very early stage of the intelligence big bang.
(0:03) Being a multi-planet species greatly increases the probable lifespan of civilization, or consciousness, or intelligence, both biological and digital.
(0:13) I think we're quite close to digital superintelligence.
(0:15) If it doesn't happen this year, next year for sure.
(0:18) Please give it up for Elon Musk.
(0:36) Elon, welcome to AI Startup School.
(0:39) We're just really blessed to have your presence here today.
(0:43) Thanks for having me.
(0:44) So, from SpaceX, Tesla, Neuralink, xAI, and more: was there ever a moment in your life, before all this, where you felt, I have to build something great?
(0:57) And what flipped that switch for you?
(1:00) Well, I didn't originally think I would build something great.
(1:03) I wanted to try to build something useful, but, I didn't think I would build anything particularly great.
(1:09) It probabilistically seemed unlikely, but I wanted to at least try.
(1:15) So you're talking to a room full of people who are all technical engineers, often some of the most eminent AI researchers coming up in the game.
(1:25) Okay.
(1:26) I think I like the term engineer better than researcher.
(1:33) I mean, I suppose if there's some fundamental algorithmic breakthrough, it's research, but otherwise, it's engineering.
(1:41) Maybe let's start way back.
(1:43) I mean, this is a room full of 18-to-25-year-olds.
(1:49) It skews younger because the founder set is younger and younger.
(1:53) Can you put yourself back into their shoes, when you were 18, 19, learning to code, even coming up with the first idea for Zip2?
(2:04) What was that like for you?
(2:06) Yeah.
(2:06) Back in '95, I was faced with a choice: either do grad studies, a PhD at Stanford in materials science, actually working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem for electric vehicles, or try to do something in this thing that most people had never heard of called the Internet.
(2:30) And I talked to my professor, Bill Nix in the materials science department, and said, can I defer for a quarter?
(2:39) Because this will probably fail, and then I'll need to come back to college.
(2:45) And then he said, this is probably the last conversation we'll have.
(2:50) And he was right.
(2:52) But I thought things would most likely fail, not that they would most likely succeed.
(2:58) And then in '95, I wrote basically, I think, the first, or close to the first, maps and directions, white pages, and yellow pages on the Internet.
(3:14) I just wrote that personally.
(3:16) I didn't even use a web server.
(3:17) I just read the port directly, because I couldn't afford a T1.
(3:26) Original office was on Sherman Avenue in Palo Alto.
(3:30) There was, like, an ISP on the floor below.
(3:32) So I drilled a hole through the floor and just ran a LAN cable directly to the ISP.
(3:40) And, yeah, my brother joined me, and another cofounder, Greg Kouri, who passed away.
(3:46) At the time, we couldn't even afford a place to stay. The office was $500 a month, so we just slept in the office and showered at the YMCA on Page Mill and El Camino.
(4:02) And, yeah, I guess we ended up building a somewhat useful company with Zip2 in the beginning.
(4:12) We did build a lot of really good software technology, but we were somewhat captured by the legacy media companies, in that Knight Ridder, the New York Times, Hearst, and whatnot were investors and customers and also on the board.
(4:36) So they kept wanting to use our software in ways that made no sense.
(4:41) I wanted to go direct to consumers.
(4:43) Anyway, long story, I'm dwelling too much on Zip2, but I really just wanted to do something useful on the Internet.
(4:50) Because I had two choices.
(4:52) Do a PhD and watch people build the Internet, or help build the Internet in some small way.
(4:59) And I was like, well, I guess I can always try and fail and then go back to grad studies.
(5:05) And, you know, it ended up being reasonably successful.
(5:08) Sold for about $300 million, which was a lot at the time.
(5:11) These days, I think the minimum ante for an AI startup is like a billion dollars.
(5:17) There are so many freaking unicorns.
(5:21) It's like a herd of unicorns at this point.
(5:24) You know, a unicorn being a billion-dollar situation.
(5:27) There's been inflation since, so that's quite a bit more money, actually.
(5:30) Yeah.
(5:31) I mean, in 1945, you could probably buy a burger for a nickel.
(5:35) Well, not quite, but, yeah, there has been a lot of inflation.
(5:39) But, I mean, the hype level in AI is pretty intense, as you've seen.
(5:46) You see companies that are less than a year old getting sometimes billion-dollar or multibillion-dollar valuations, which I guess could pan out, and probably will pan out in some cases.
(6:02) But it is eye-watering to see some of these valuations.
(6:08) Yeah.
(6:08) What do you think?
(6:09) I mean, well, I'm pretty bullish, personally.
(6:14) I'm pretty bullish, honestly.
(6:16) I think the people in this room are going to create a lot of the value. A billion people in the world should be using this stuff, and we're only scratching the surface of it.
(6:28) I love the Internet story in that, even back then, you were a lot like the people in this room: the CEOs of all the legacy media companies looked to you
(6:43) as the person who understood the Internet.
(6:45) And a lot of the world, the corporate world, the world at large that does not understand what's happening with AI, is going to look to the people in this room for exactly that.
(6:55) So what are some of the tangible lessons?
(6:57) It sounds like one of them is: don't give up board control, or be careful and have a really good lawyer.
(7:04) I guess with my first startup, the real mistake was having too much shareholder and board control from legacy media companies, who necessarily see things through the lens of legacy media and will kind of make you do things that seem sensible to them but don't really make sense with the new technology.
(7:29) I should point out that I didn't actually, at first, intend to start a company.
(7:35) I tried to get a job at Netscape.
(7:38) I sent my resume in to Netscape.
(7:40) And Marc Andreessen knows about this.
(7:43) But I don't think he ever saw my resume, and nobody responded.
(7:47) So then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but I was too shy to talk.
(7:53) I didn't talk to anyone.
(7:54) So I'm like, man, this is ridiculous.
(7:56) So I'll just write stuff for myself and see how it goes.
(7:58) So it wasn't actually from the standpoint of, I want to start a company.
(8:01) I just wanted to be part of building the Internet in some way.
(8:06) And since I couldn't get a job at an Internet company, I had to start an Internet company.
(8:11) Anyway, yeah.
(8:13) I mean, AI will so profoundly change the future.
(8:18) It's difficult to fathom how much.
(8:21) But, assuming things don't go awry and AI doesn't kill us all, you'll ultimately see an economy that is not just ten times the current economy.
(8:43) Ultimately, if we, or our future, mostly machine, descendants become a Kardashev Type II civilization or beyond, we're talking about an economy that is thousands of times, maybe millions of times, bigger than the economy today.
(9:08) So, yeah.
(9:11) I mean, I did sort of feel a bit like, you know, when I was in DC taking a lot of flak for getting rid of waste and fraud, which was an interesting side quest, as side quests go.
(9:24) Gotta get back to the main quest.
(9:26) Yeah.
(9:27) I gotta get back to the main quest here.
(9:30) So back to the main quest.
(9:33) But I do feel a little bit like, you know, fixing the government is kind of like: the beach is dirty, there are some needles and feces and trash, and you want to clean up the beach.
(9:46) But then there's also this thousand-foot wall of water, a tsunami of AI.
(9:52) And how much does cleaning the beach really matter if you've got a thousand-foot tsunami about to hit?
(9:58) Not that much.
(10:00) Oh, we're glad you're back on the main quest.
(10:02) It's very important.
(10:04) Yeah.
(10:04) Back to the main quest.
(10:06) Building technology is what I like doing.
(10:10) There's just so much noise.
(10:12) The signal-to-noise ratio in politics is terrible.
(10:15) I mean, I live in San Francisco, so you don't need to tell me twice.
(10:20) Yeah.
(10:21) DC is, I guess, all politics.
(10:24) But if you're trying to build a rocket or cars, or trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking or your software or your hardware won't work.
(10:41) You can't fool math; math and physics are rigorous judges.
(10:46) So I'm used to being in a maximally truth-seeking environment, and that's definitely not politics.
(10:52) So, anyway, I'm glad to be back in technology.
(10:56) I'm kind of curious, going back to the Zip2 moment.
(10:59) You had an exit worth hundreds of millions of dollars.
(11:03) I got $20 million.
(11:05) Right.
(11:06) Okay.
(11:06) So you solved the money problem at least.
(11:09) And you basically took it and kept rolling with X.com, which merged with Confinity and became PayPal.
(11:17) Yes.
(11:18) I kept the chips on the table.
(11:20) Yeah.
(11:21) So Not everyone does that.
(11:23) A lot of the people in this room will have to make that decision, actually.
(11:26) What drove you to jump back into the ring?
(11:29) Well, I felt that with Zip2 we built incredible technology, but it never really got used.
(11:37) At least from my perspective, we had better technology than, say, Yahoo or anyone else, but it was constrained by our customers.
(11:46) And so I wanted to do something where we wouldn't be constrained by our customers and could go direct to consumer.
(11:49) And that's what ended up being X.com and PayPal.
(11:57) Essentially, X.com merging with Confinity, which together created PayPal.
(12:02) And the PayPal diaspora has probably created more companies than anything else in the twenty-first century.
(12:14) You know?
(12:15) So many talented people were at the combination of Confinity and X.com.
(12:22) I just felt like we kind of got our wings clipped with Zip2, and I was like, okay.
(12:28) What if our wings aren't clipped and we go direct to consumer?
(12:31) And that's what PayPal ended up being.
(12:35) But yeah.
(12:36) I got that $20 million check for my share of Zip2.
(12:43) At the time, I was living in a house with four housemates, and I had, like, I don't know, $10,000 in the bank.
(12:51) And then this check arrives in the mail, of all places.
(12:56) And then my bank balance went from $10,000 to $20,010,000.
(13:02) You know, like, well, okay.
(13:04) I had to pay taxes on that and all, but then I ended up putting almost all of it into X.com.
(13:11) As you said, just keeping almost all the chips on the table.
(13:17) And yeah.
(13:20) And then after PayPal, I was kind of curious as to why we had not sent anyone to Mars.
(13:28) I went on the NASA website to find out when we were sending people to Mars, and there was no date.
(13:33) I thought maybe it was just hard to find on the website, but in fact there was no real plan to send people to Mars.
(13:42) So then, you know, this is such a long story, so I don't want to take up too much time here.
(13:46) I think we're all listening with rapt attention.
(13:50) So I was actually on the Long Island Expressway with my friend Adeo Ressi.
(13:55) We were housemates in college.
(13:57) And Adeo was asking me, what am I going to do after PayPal?
(14:02) And I was like, I don't know.
(14:03) I guess maybe I'd like to do something philanthropic in space, because I didn't think I could actually do anything commercial in space; that seemed like the purview of nations.
(14:12) But, you know, I was kind of curious as to when we were going to send people to Mars.
(14:18) And that's when I was like, oh, it's not on the website.
(14:20) There was nothing on the NASA website.
(14:23) So then I started digging in, and I'm definitely summarizing a lot here.
(14:30) But my first idea was to do a philanthropic mission to Mars called Life to Mars, which would send a small greenhouse with seeds in dehydrated nutrient gel, land that on Mars, hydrate the gel, and then you'd have this great sort of money shot of green plants on a red background.
(14:57) For the longest time, by the way, I didn't realize money shot is, I think, a porn reference.
(15:01) But anyway, the point is that that would be the great shot, green plants on a red background, to try to inspire NASA and the public to send astronauts to Mars.
(15:14) As I learned more... oh, and along the way, by the way, I went to Russia in 2001 and 2002 to buy ICBMs, which was an adventure.
(15:27) You know?
(15:27) You go and meet with Russian high command and say, I'd like to buy some ICBMs.
(15:31) This was to get to space.
(15:33) As a rocket.
(15:34) Not to nuke anyone. But as a result of arms reduction talks, they had to actually destroy a bunch of their big nuclear missiles.
(15:47) So I was like, well, how about we take two of those, minus the nuke, and add an additional upper stage for Mars?
(15:56) But it was kind of trippy being in Moscow in 2001 negotiating with the Russian military to buy ICBMs.
(16:05) Like, that's crazy.
(16:10) But they kept raising the price on me, which is literally the opposite of what a negotiation should do.
(16:19) So I was like, man, these things are getting really expensive.
(16:22) And then I came to realize that actually the problem was not that there was insufficient will to go to Mars, but that there was no way to do so without breaking the budget, even the NASA budget.
(16:33) So that's when I decided to start SpaceX, to advance rocket technology to the point where we could send people to Mars.
(16:44) And that was in 2002.
(16:46) So you didn't start out wanting to start a business.
(16:51) You wanted to start something that was interesting to you, that you thought humanity needed.
(16:57) And then, like a cat pulling on a string,
(17:03) the ball sort of unravels, and it turns out this could be a very profitable business.
(17:10) I mean, it is now, but there had been no prior example of a rocket startup really succeeding.
(17:18) There had been various attempts at commercial rocket companies, and they'd all failed.
(17:23) So starting SpaceX was really from the standpoint of, I think there's a less than 10% chance of being successful, maybe 1%.
(17:34) I don't know.
(17:35) But if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors, because they're just beholden to the government, and the government just wants to do very conventional things.
(17:52) So it's either coming from a startup or it's not happening at all.
(17:55) And a small chance of success is better than no chance of success.
(18:00) And so, yeah.
(18:02) So SpaceX: I started that in mid-2002 expecting to fail.
(18:09) Like I said, probably a 90% chance of failing.
(18:12) And even when recruiting people, I didn't try to make out that it would succeed. I said, we're probably going to die, but there's a small chance we might not.
(18:22) And this is the only way to get people to Mars and advance the state of the art.
(18:28) And then I ended up being chief engineer of the rocket, not because I wanted to, but because I couldn't hire anyone who was good.
(18:38) None of the good chief engineers would join, because it was like, this is too risky.
(18:43) You're going to die.
(18:44) And so I ended up being chief engineer of the rocket.
(18:48) And, you know, the first three flights did fail, so it was a bit of a learning exercise there.
(18:54) The fourth one fortunately worked.
(18:57) But if the fourth one hadn't worked, I had no money left, and that would have been curtains.
(19:03) So it was a pretty close thing.
(19:04) If the fourth launch of Falcon 1 had not worked, it would have been curtains, and we'd have just joined the graveyard of prior rocket startups.
(19:13) So my estimate of success was not far off.
(19:18) We just made it by the skin of our teeth.
(19:22) And Tesla was happening sort of simultaneously.
(19:26) 2008 was a rough year, because by mid-2008, summer 2008, the third launch of SpaceX had failed, a third failure in a row.
(19:41) The Tesla financing round had failed, and so Tesla was going bankrupt fast.
(19:48) It was just like, man, this is grim.
(19:53) This is going to be a cautionary tale, an exercise in hubris.
(20:00) Probably throughout that period, a lot of people were saying, you know, Elon is a software guy.
(20:05) Why is he working on hardware?
(20:07) Why would yeah.
(20:09) Why would he choose to work on this?
(20:10) Right?
(20:10) Yeah.
(20:11) 100%.
(20:11) You can look at the press of that time; it's still online.
(20:16) You can just search it.
(20:17) And they kept calling me Internet guy.
(20:21) Like, Internet guy, aka fool, is attempting to build a rocket company.
(20:29) So, you know, we got ridiculed quite a lot.
(20:37) And it does sound pretty absurd: Internet guy starts rocket company.
(20:41) It doesn't sound like a recipe for success, frankly.
(20:45) So I didn't hold it against them.
(20:46) I was like, yeah.
(20:47) Admittedly, it does sound improbable, and I agree that it's improbable.
(20:53) But, fortunately, the fourth launch worked, and NASA awarded us a contract to resupply the space station.
(21:03) And I think that was maybe, I don't know, December 22; it was right before Christmas.
(21:13) Because even the fourth launch working wasn't enough to succeed.
(21:16) We also needed a big contract to keep us alive.
(21:19) So I got that call from the NASA team, and they said, we're awarding you one of the contracts to resupply the space station.
(21:31) I literally blurted out, I love you guys, which is not normally what they hear, because it's usually pretty sober.
(21:41) But I was like, man, this is a company saver.
(21:43) And then we closed the Tesla financing round in the last hour of the last day that it was possible, which was 6 p.m., December 24, 2008.
(21:55) We would have bounced payroll two days after Christmas if that round hadn't closed.
(21:59) So that was a nerve-racking end of 2008.
(22:01) That's for sure.
(22:03) I guess from your PayPal and Zip2 experience, jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields.
(22:19) You know, most of the people in this room, I don't think, have even managed a single person yet.
(22:25) They're just starting their careers.
(22:27) What would you tell the Elon who had never had to do that yet?
(22:33) I generally think: try to be as useful as possible.
(22:36) It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people.
(22:44) Say the area under the curve of total utility: how useful have you been to your fellow human beings, times how many people?
(22:51) It's almost like the physics definition of true work.
(22:54) It's incredibly difficult to do that.
(22:56) And I think if you aspire to do true work, your probability of success is much higher.
(23:04) Like, don't aspire to glory.
(23:06) Aspire to work.
(23:07) How can you tell that it's true work?
(23:09) Is it external?
(23:11) Is it what happens with other people, or what the product does for people?
(23:15) What is that for you?
(23:18) When you're looking for people to come work for you, what's the salient thing that you look for?
(23:22) Well, that's a different question.
(23:24) I guess, in terms of your end product, you just have to say: if this thing is successful, how useful will it be, to how many people?
(23:33) That's what I mean.
(23:35) And then, whether you're CEO or in any other role in a startup, you do whatever it takes to succeed.
(23:42) And just always be smashing your ego; internalize responsibility.
(23:51) A major failure mode is when the ego-to-ability ratio is >> 1.
(23:59) You know?
(24:00) If your ego-to-ability ratio gets too high, you're going to basically break the feedback loop to reality.
(24:08) In AI terms, you'll break your RL loop.
(24:15) You want a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever the task is, no matter whether it's grand or humble.
(24:26) So that's kind of why I prefer the term engineering as opposed to research.
(24:34) And I actually don't want to call xAI a lab.
(24:39) I just want it to be a company.
(24:42) Whatever the simplest, most straightforward, ideally lowest-ego terms are, those are generally a good way to go.
(24:54) Yeah.
(24:55) You want to close the loop on reality, hard.
(24:59) That's a super big deal.
(25:01) I think everyone in this room really looks up to everything you've done around being sort of a paragon of first principles.
(25:09) And, thinking about the stuff you've done, how do you actually determine your reality?
(25:16) Because that seems like a pretty big part of it.
(25:19) Other people, people who have never made anything, non-engineers, sometimes journalists who've never built anything, will criticize you.
(25:28) But then, clearly, you have another set of people who are builders, who have a very high area under the curve, who are in your circle.
(25:38) How should people approach that?
(25:41) What has worked for you, and what would you pass on, say, to X, to your children?
(25:47) What do you tell them when you're like, you need to make your way in this world?
(25:52) Here's how to construct a reality that is predictive, from first principles.
(25:57) Well, the tools of physics are incredibly helpful for understanding and making progress in any field.
(26:07) First principles just means breaking things down to the fundamental axiomatic elements that are most likely to be true, and then reasoning up from there as cogently as possible, as opposed to reasoning by analogy or metaphor.
(26:22) And then there are things like thinking in the limit.
(26:25) If you extrapolate, minimize this thing or maximize that thing, thinking in the limit is very, very helpful.
(26:32) I use all the tools of physics.
(26:35) They apply to any field.
(26:39) It's like a superpower, actually.
(26:43) So take rockets, for example. You could ask, how much should a rocket cost?
(26:50) The typical approach people take is to look historically at what rockets have cost and assume that any new rocket must cost somewhat the same.
(27:01) A first-principles approach would be: you look at the materials the rocket is comprised of.
(27:06) So, aluminum, copper, carbon fiber, steel, whatever the case may be. How much does the rocket weigh, what are its constituent elements, and how much do they weigh?
(27:19) What is the material price per kilogram of those constituent elements?
(27:23) And that sets the actual floor on what a rocket can cost.
(27:27) It can asymptotically approach the cost of the raw materials.
(27:31) And then you realize, oh, actually, the raw materials of a rocket are only maybe one or two percent of the historical cost of a rocket.
(27:40) So the manufacturing must necessarily be very inefficient if the raw material cost is only one or two percent.
(27:49) That would be a first-principles analysis of the potential for cost optimization of a rocket.
(27:56) And that's before you get to reusability.
(27:58) To give an AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers, this was the beginning of last year, and said that we needed 100,000 H100s to be able to train coherently.
(28:20) And their estimates for how long it would take to complete that were eighteen to twenty-four months.
(28:26) I was like, well, we need to get that done in six months.
(28:29) Or we won't be competitive.
(28:34) So then, if you break that down, what are the things you need?
(28:38) Well, you need a building.
(28:39) You need power.
(28:40) You need cooling.
(28:43) We didn't have enough time to build a building from scratch, so we had to find an existing building.
(28:47) So we found a factory in Memphis that was no longer in use, which used to build Electrolux products.
(28:55) But the input power was 15 megawatts, and we needed 150 megawatts.
(29:00) So we rented generators and put them on one side of the building, and then we had to have cooling.
(29:06) So we rented about a quarter of the mobile cooling capacity of the US and put the chillers on the other side of the building.
(29:12) That didn't fully solve the problem, because the power variations during training are very big.
(29:20) Power can drop by 50% in a hundred milliseconds, which the generators can't keep up with.
(29:26) So then we added Tesla Megapacks and modified the software in the Megapacks to be able to smooth out the power variation during the training run.
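The role of the battery buffer described here can be illustrated with a toy simulation: a generator can only ramp a few megawatts per step, so the battery supplies or absorbs the fast difference between the training load and generation. All the numbers (ramp rate, time step, load profile) are invented placeholders, not xAI's actual control logic:

```python
# Toy model of battery power smoothing. The training load can drop ~50% in
# ~100 ms, far faster than generators can ramp, so a battery covers the gap
# at each step. All numbers are illustrative assumptions.

DT_MS = 100          # simulation step, ms (the timescale of the load swings)
GEN_RAMP_MW = 2.0    # max generator change per step, MW (slow slew rate)

def simulate(load_mw):
    """Return (load, generator, battery) power per step, in MW."""
    gen = load_mw[0]                  # generators start matched to the load
    history = []
    for load in load_mw:
        # Generator ramps toward the load, limited by its slew rate.
        step = max(-GEN_RAMP_MW, min(GEN_RAMP_MW, load - gen))
        gen += step
        battery = load - gen          # battery fills the fast gap (+/- MW)
        history.append((load, gen, battery))
    return history

# Load drops 50% in a single 100 ms step, then recovers.
load_profile = [150.0] * 5 + [75.0] * 5 + [150.0] * 5
for load, gen, battery in simulate(load_profile):
    print(f"load={load:6.1f} MW  gen={gen:6.1f} MW  battery={battery:+7.1f} MW")
```

On the drop step the generator can only shed 2 MW, so in this sketch the battery absorbs the other 73 MW; without that buffer, the generators would see the full 75 MW swing in 100 ms.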
(29:37) And then there were a bunch of networking challenges, because the networking cables, if you're trying to make 100,000 GPUs train coherently, are very, very challenging.
(29:55) It sounds like, for almost any of those things you mentioned, I could imagine someone telling you very directly, no, you can't have that.
(29:56) You can't have that power.
(29:57) You can't have this.
(29:59) And it sounds like one of the salient pieces of first-principles thinking is, actually, let's ask why.
(30:04) Let's figure that out, and actually, let's challenge the person across the table.
(30:10) And if I don't get an answer that I feel good about, I'm not going to let that no stand.
(30:18) I mean, that feels like something everyone would face if they tried to do what you're doing in hardware; hardware seems to uniquely need this.
(30:27) In software, we have lots of fluff; it's like, we can add more CPUs for that.
(30:34) It'll be fine.
(30:35) But in hardware, it's just not going to work.
(30:37) I think these general principles of first-principles thinking apply to software, hardware, anything really.
(30:44) I'm just using a hardware example of how we were told something was impossible, but once we broke it down into the constituent elements, we need a building, we need power, we need cooling, we need power smoothing, we could solve those constituent elements.
(31:03) And then we just ran the networking operation, all the cabling, in four shifts, twenty-four seven.
(31:12) And I was sleeping in the data center and also doing cabling myself.
(31:18) And there were a lot of other issues to solve.
(31:21) Nobody had done a training run with 100,000 H100s training coherently last year.
(31:29) Maybe it's been done this year.
(31:30) I don't know.
(31:30) But then we ended up doubling that to 200,000.
(31:34) And so now we've got 150,000 H100s, 50k H200s, and 30k GB200s in the Memphis training center.
(31:46) And we're about to bring 110,000 GB200s online at a second data center, also in the Memphis area.
(31:54) Is it your view that pretraining is still working, that the scaling laws still hold, and that whoever wins this race will have basically the biggest, smartest possible model that you could distill?
(32:09) Well, there's other various elements that, side competitiveness for for large AI.
(32:18) This this, for sure, the the talent of the people matter.
(32:23) The scale of the hardware matters and how well you're able to bring that hardware to bear.
(32:27) So you can't just order a whole bunch of GPUs and they they don't then you can't just plug them in.
(32:33) So you've you've gotta you've gotta get a lot of GPUs and have them train train coherently and stably.
(32:41) Then it's like, what unique access to data do you have?
(32:44) I guess distribution matters to some degree as well.
(32:47) Like, how do people get exposed to your AI?
(32:49) Those are the critical factors if it's gonna be, like, a large foundation model that's competitive.
(32:57) You know, as many have said, I think my friend Ilya Sutskever has said, we've kind of run out of human-generated pretraining data. You run out of tokens pretty fast, certainly of high-quality tokens.
(33:15) And then you need to essentially create synthetic data, and be able to accurately judge the synthetic data that you're creating, to verify it.
(33:30) Like, is this real synthetic data, or is it a hallucination that doesn't actually match reality?
(33:38) So achieving grounding in reality is tricky, but we are at the stage where there's more effort put into synthetic data.
(33:49) And, like, right now, we're training Grok 3.5, which has a heavy focus on reasoning.
(33:55) Going back to your physics point, what I've heard for reasoning is that hard science, particularly physics textbooks, is very useful for reasoning.
(34:03) Whereas, I think, researchers have told me that social science is totally useless for reasoning.
(34:11) Yes.
(34:12) That's probably true.
(34:14) So yeah.
(34:18) Something that's gonna be very important in the future is combining deep AI in the data center or supercluster with robotics.
(34:31) So that's, you know, things like the Optimus humanoid robot.
(34:35) Incredible.
(34:36) Yeah.
(34:37) Optimus is awesome.
(34:39) There are gonna be so many humanoid robots, and robots of all sizes and shapes, but my prediction is that there will be more humanoid robots by far than all other robots combined, maybe by an order of magnitude, like a big difference.
(34:56) And is it true that you're planning a robot army of a sort?
(35:01) Whether we do it, or, you know, whether Tesla does it; you know, Tesla works closely with xAI.
(35:09) Like, you've seen how many humanoid robot startups there are.
(35:12) Like, I think Jensen Huang was on stage with a massive number of robots from different companies.
(35:21) I think it was, like, a thousand different humanoid robots.
(35:26) So, I mean, I guess part of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't wanna make Terminator real.
(35:35) You know?
(35:37) So I've been, I guess, at least until recent years, dragging my feet on AI and humanoid robotics.
(35:47) And then I came to the realization that it's happening whether I do it or not.
(35:51) So you've really got two choices.
(35:54) You could either be a spectator or a participant.
(35:57) It's like, well, I guess I'd rather be a participant than a spectator.
(36:02) So now it's, you know, pedal to the metal on humanoid robots and digital superintelligence.
(36:09) So I guess, you know, there's a third thing that everyone has heard you talk a lot about, that I'm really a big fan of: becoming a multiplanetary species.
(36:18) Where does this fit?
(36:19) You know, this is all not just a ten- or twenty-year thing, maybe a hundred-year thing.
(36:23) Like, it's a many, many generations for humanity kind of thing.
(36:28) You know, how do you think about it?
(36:31) There's, you know, AI, obviously.
(36:32) There's embodied robotics, and then there's being a multiplanetary species.
(36:38) Does everything sort of feed into that last point?
(36:41) Or, you know, what are you driven by right now for the next ten, twenty, and a hundred years?
(36:46) Jeez.
(36:46) A hundred years?
(36:47) Man, I hope civilization's around in a hundred years.
(36:50) If if it is around, it's gonna look very different from civilization today.
(36:56) I mean, I'd predict that there's gonna be at least five times as many humanoid robots as there are humans, maybe 10 times.
(37:08) And one way to look at the progress of civilization is percentage completion of the Kardashev scale.
(37:13) So if you're at Kardashev scale one, you've harnessed all the energy of a planet.
(37:19) In my opinion, we've only harnessed maybe one or two percent of Earth's energy.
(37:27) So we've got a long way to go to be Kardashev scale one.
(37:31) Then Kardashev two: you've harnessed all the energy of a sun, which would be, I don't know, a billion times more energy than Earth, maybe closer to a trillion.
(37:43) And then Kardashev three would be all the energy of a galaxy.
(37:47) Pretty far from that.
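As a rough sanity check on the ratio Musk mentions, here is a back-of-the-envelope sketch in Python. The constants are standard textbook values, not figures from the conversation:

```python
import math

# Standard astronomical constants (assumptions, not from the transcript):
SUN_LUMINOSITY_W = 3.828e26     # total solar power output
SOLAR_CONSTANT_W_M2 = 1361.0    # solar flux at Earth's orbital distance
EARTH_RADIUS_M = 6.371e6

# Power Earth intercepts: the flux times Earth's cross-sectional disk area.
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2

# Ratio of Kardashev II (whole sun) to a planetary energy budget.
ratio = SUN_LUMINOSITY_W / earth_intercepted_w
print(f"Sun output is about {ratio:.1e}x the power reaching Earth")
```

This puts the Sun's total output at roughly two billion times the power actually reaching Earth, consistent with the "billion times" end of the estimate.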
(37:49) So we're we're at the very, very early stage of the intelligence big bang.
(37:55) In terms of being multiplanetary, I hope, I think, we'll have enough mass transferred to Mars within roughly thirty years to make Mars self-sustaining, such that Mars can continue to grow and prosper even if the resupply ships from Earth stop coming.
(38:15) And that greatly increases the probable lifespan of civilization, or consciousness, or intelligence, both biological and digital.
(38:28) So that's why I think it's important to become a multi-planet species.
(38:30) And I'm somewhat troubled by the Fermi paradox: like, why have we not seen any aliens?
(38:36) And it could be because intelligence is incredibly rare, and maybe we're the only ones in this galaxy, in which case intelligence and consciousness is just that tiny candle in a vast darkness, and we should do everything possible to ensure the tiny candle does not go out.
(38:56) And being a multi-planet species, or making consciousness multiplanetary, greatly improves the probable lifespan of civilization.
(39:05) And it's the next step before going to other star systems.
(39:10) Once you at least have two planets, then you've got a forcing function for the improvement of space travel, and that is ultimately what will lead to consciousness expanding to the stars.
(39:24) It could be that the Fermi paradox dictates that once you get to some level of technology, you destroy yourself.
(39:31) How do we save ourselves?
(39:32) How do we actually, what would you prescribe to, I mean, a room full of engineers?
(39:38) Like, what can we do to prevent that from happening?
(39:41) Yeah.
(39:41) How do we avoid the great filters?
(39:43) One of the great filters would obviously be global thermonuclear war.
(39:48) So we should try to avoid that.
(39:51) I guess, building benign AI: AI that loves humanity, and, you know, robots that are helpful.
(40:06) So something that I think is extremely important in building AI is a very rigorous adherence to truth, even if that truth is politically incorrect.
(40:18) My intuition for what could make AI very dangerous is if you force AI to believe things that are not true.
(40:27) How do you think about, you know, this sort of argument of open for safety versus closed for competitive edge?
(40:34) I mean, I think the great thing is you have a competitive model.
(40:37) Many other people also have competitive models.
(40:40) And in that sense, you know, we're sort of off of maybe the worst timeline, the one I'd be worried about, which is, you know, a fast takeoff that's only in one person's hands.
(40:48) You know, that might sort of collapse a lot of things.
(40:53) Whereas now we have choice, which is great.
(40:55) How do you think about this?
(40:58) I do think there will be several deep intelligences.
(41:02) Maybe at least five, maybe as many as 10.
(41:12) I'm not sure that there are gonna be hundreds; it'll probably be close to, like, 10 or something like that, of which maybe four will be in The US.
(41:25) So I don't think it's gonna be any one AI that has a runaway capability.
(41:37) But yeah.
(41:38) So several deep intelligences.
(41:40) What will these deep deep intelligences actually be doing?
(41:44) Will it be scientific research or trying to hack each other?
(41:48) Probably all of the above.
(41:52) I mean, hopefully, they will discover new physics, and they're definitely going to invent new technologies.
(42:04) Like, I mean, I think we're quite close to digital superintelligence.
(42:08) It may happen this year, and if it doesn't happen this year, next year for sure.
(42:13) Digital superintelligence defined as smarter than any human at anything.
(42:19) Well, so how do we direct that toward super abundance?
(42:22) You know, we could have robotic labor.
(42:25) We could have cheap energy, intelligence on demand.
(42:28) You know, is that sort of the white pill?
(42:30) Like, where do you sit on the spectrum?
(42:33) And are there tangible things that you would encourage everyone here to be working on to make that white pill actually a reality?
(42:42) I think it most likely will be a good outcome.
(42:47) I guess I'd sort of agree with Geoff Hinton that maybe there's a ten to twenty percent chance of annihilation.
(42:53) But look on the bright side: that's an eighty to ninety percent probability of a great outcome.
(42:59) So, yeah, I can't emphasize this enough.
(43:05) A rigorous adherence to truth is the most important thing for AI safety.
(43:13) And, obviously, empathy for, humanity and life as we know it.
(43:18) We haven't talked about Neuralink at all yet, but I'm curious: you know, you're working on closing the input and output gap between humans and machines.
(43:29) How critical is that to AGI and ASI?
(43:32) And, you know, once that link is made, can we not only read but also write?
(43:38) Neuralink is not necessary to solve digital superintelligence.
(43:43) That'll happen before Neuralink is at scale.
(43:47) But what Neuralink can effectively do is solve the input/output bandwidth constraints; especially our output bandwidth, which is very low.
(43:58) The sustained output of a human over the course of a day is less than one bit per second.
(44:07) There are eighty-six thousand four hundred seconds in a day, and it's extremely rare for a human to output more than that number of symbols per day.
(44:15) Let alone sustain that over several days in a row.
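The arithmetic behind that claim can be sketched quickly. The 10,000-words-per-day figure below is an illustrative assumption for a very prolific writer, not something stated in the interview:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

# Illustrative assumption: 10,000 words written in a day,
# at roughly 5 characters (symbols) per word.
symbols_per_day = 10_000 * 5  # 50,000 symbols

# Sustained output rate, loosely treating one symbol as about one bit.
rate = symbols_per_day / SECONDS_PER_DAY
print(f"{rate:.2f} symbols per second")  # well under 1 per second
```

Even at that unusually high output, the sustained rate stays below one symbol per second, which is the rough sense in which human output bandwidth is "less than one bit per second."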
(44:19) So with a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth, input being write operations to the brain.
(44:35) We now have five humans who have received the read implant, where it's reading signals, and you've got people with ALS who really are tetraplegic, but they can now communicate with similar bandwidth to a human with a fully functioning body, and control their computer and phone, which is pretty cool.
(45:02) And then I think in the next six to twelve months, we'll be doing our first implants for vision, where even if somebody's completely blind, we can write directly to the visual cortex.
(45:19) And we've had that working in monkeys.
(45:22) I think one of our monkeys now has had a visual implant for three years.
(45:27) And at first, it'll be fairly low resolution, but long term, you would have very high resolution and be able to see multispectral wavelengths.
(45:37) So you could see in infrared, ultraviolet, radar.
(45:42) It's like a superpower situation.
(45:44) But, like, at some point, the cybernetic implants would not simply be correcting things that went wrong, but augmenting human capabilities dramatically.
(45:55) Augmenting intelligence and senses and bandwidth dramatically.
(46:00) And that's gonna happen at some point.
(46:06) But digital superintelligence will happen well before that.
(46:10) At least if we have a Neuralink, we'll be able to appreciate the AI better.
(46:18) I guess one of the limiting reagents to all of your efforts across all of these different domains is access to the smartest possible people.
(46:27) Yes.
(46:28) But, you know, sort of simultaneous to that, we have, you know, Grok can talk and reason, and it's maybe 130 IQ now, and it's probably gonna be superintelligent soon.
(46:39) Yeah.
(46:40) How do you reconcile those two things?
(46:42) Like, what's gonna happen in, you know, five or ten years, and what should the people in this room do to make sure that, you know, they're the ones who are creating, instead of maybe ending up below the API line?
(46:53) Well, they call it the singularity for a reason, because we don't know what's gonna happen.
(46:57) In the not-that-far future, the percentage of intelligence that is human will be quite small.
(47:04) At some point, the collective sum of human intelligence will be less than 1% of all intelligence.
(47:13) And if things get to a Kardashev level two, we're talking about human intelligence even assuming a significant increase in human population and intelligence augmentation, like massive intelligence augmentation, where, like, everyone has an IQ of a thousand type of thing.
(47:34) Even in that circumstance, collective human intelligence will probably be one billionth that of digital intelligence.
(47:41) Anyway, we are the biological bootloader for digital superintelligence.
(47:46) I guess just to end off.
(47:50) So was that, like, was that a good bootloader?
(47:52) Where do we go?
(47:53) How do we go from here?
(47:54) I mean, all of this is pretty wild sci-fi stuff that also could be built by the people in this room.
(48:03) You know, if you do you have a closing thought for the smartest technical people of this generation right now?
(48:10) What should they be doing?
(48:11) What should they what should they be working on?
(48:13) What should they be thinking about, you know, tonight as they go to dinner?
(48:17) Well, as I started off with, I think if you're doing something useful, that's great.
(48:26) If you just try to be as useful as possible to your fellow human beings, then you're doing something good.
(48:34) I keep harping on this: like, focus on super-truthful AI.
(48:37) That's the most important thing for AI safety.
(48:43) You know, obviously, if anyone's interested in working at xAI, please let us know.
(48:51) We're trying to make Grok the maximally truth-seeking AI, and I think that's a very important thing.
(49:01) Hopefully, we can understand the nature of the universe.
(49:03) That's really, I guess, what AI can hopefully tell us.
(49:07) Maybe AI can tell us where the aliens are, and, you know, how did the universe really start?
(49:14) How will it end?
(49:15) What are the questions that we don't know that we should ask?
(49:19) And are we in a simulation?
(49:25) Or what level of simulation are we in?
(49:28) Well, I think we're gonna find out.
(49:30) By an NPC?
(49:33) Elon, thank you so much for joining us.
(49:35) Everyone, please give it up for Elon Musk.
(49:39) Okay.
(49:40) Bye.