Web3 Galaxy Brain 🌌🧠

Subscribe
Web3 Galaxy Brain

Ethereum's Roadmap with Domothy

6 July 2023

Summary

Show more

Transcript

Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week, I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guest today is Dom, who goes by Domothy online. Dom works on the research team at the Ethereum Foundation. He's passionate about Ethereum development and tries to keep tabs on all parts of the roadmap and communicates these details between the many teams at the EF. This conversation comes in three parts. In the introduction, Dom and I discuss how he found his way from passionate bystander to working for the EF himself. In the second section, we discuss the hottest developments in the six areas of the Ethereum roadmap. The merge, the surge, the scourge, the verge, the purge, and the splurge. We cover whether or not state expiry should concern on-chain NFT devs, whether the price of L1 gas will go up or down over time, and much more. In the final chapter, we discuss the political and social organization of Ethereum development in and outside of the EF to try to determine if there are any lessons we can extract for DAOs and other open-source blockchain development initiatives. It's always exciting getting to peek inside the world of the EF and core protocol development, which is producing the most exciting specs at the intersection of cryptography and humanity. This episode of Web3 Galaxy Brain is sponsored by CropTop. CropTop is a decentralized blogging app built on Ethereum and IPFS. If you own a .eth name, you can use it as your own decentralized personal site served peer-to-peer over IPFS with content fees and revenue streams like NFT minting baked right in. It's all accessible right now from the vanilla browser. No emails, passwords, or wallets required. The easiest way to understand what it is is to just take a look. Get started at croptop.eth.limo today. My thanks to CropTop for sponsoring this episode of Web3 Galaxy Brain. 
If you'd like to sponsor an episode, visit web3galaxybrain.com for details. It's as simple as minting an NFT. As always, this show is provided for entertainment and education purposes only and does not constitute financial advice or any form of endorsement or suggestion. Crypto is risky, and you alone are responsible for doing your research and making your own decisions. And now, I hope you enjoy the show. Dom, thank you for coming onto the show today. I'm excited about this conversation.

Yeah, I'm excited too.

Thanks for having me on.

Nicholas: This is your third podcast, is that right?

Yeah, officially.

Domothy: This is my first live Twitter space, but officially third podcast.

Nicholas: Wow. That's great. I have so many questions to ask you. Awesome. Okay, yeah, I'm super excited about this episode. I'm super excited to talk to you. It was great running into you just by chance the other night. It's always exciting to meet someone from Ethereum Foundation. And you're in charge of the roadmap, is that right?

Domothy: I wouldn't say I'm in charge per se, but I know a lot about the roadmap, and I love educating people on it, and answering questions, and just clarifying stuff.

Nicholas: Right. I guess the way Ethereum development works, and EF especially, there's not really someone in charge necessarily, but collaboration towards some common goals.

Domothy: Yeah.

Nicholas: So I'm curious, what were you doing before you were working at the EF?

Domothy: I was a software developer, way outside of crypto. Basically, I had this fully remote job since COVID. And then I would procrastinate my work by being on Twitter and learning about Ethereum, until I decided that I knew too much about Ethereum and I had to do something about it. So I tried to find a job inside the space, which is how I ended up at the Ethereum Foundation.

Nicholas: Awesome. And for other people who think that's really cool or exciting: is that a path that's open to anybody, even if it's not easy? Were there specific qualifications that made it possible for you, or was it mostly your enthusiasm that got you there?

Domothy: A lot of people at the EF are what we call people who just show up. So yeah, it is definitely possible if you have the right attitude. It's not about formal credentials, as in needing a specific degree; it's more that you have to be knowledgeable and have the technical skills, especially when you're on the more technical side of things. But this industry is growing really fast, and there are a lot of jobs for everyone. This is just my own personal experience, though: I did kind of just show up and got the job, which is pretty cool.

Nicholas: It is pretty cool. Sounds a lot like crypto in general.

Domothy: Yeah, it's pretty laid back.

Nicholas: Yeah. Although pretty serious issues too, which we're going to get into. So how long have you been there?

Domothy: Since September of last year. So just like the week right after the merge. That's my anchor for the time frame. Time is weird now that we do crypto, traveling everywhere, and it's hard to keep track.

Nicholas: Totally.

Domothy: Since the merge.

Nicholas: Got it. So about a year. Are you in a certain unit of the Ethereum Foundation? I know it's quite a flat organization, but how are you organized? Do you have responsibilities reporting to a certain group or how is it organized internally vis-a-vis your job?

Domothy: Officially, I'm with the research team, but it's still pretty loose. There's about 15 to 20 people officially in the consensus R&D department. That's basically where I'm at. But other than that, there's a bunch of other teams: the RIG team, the cryptography team, the security team, and the Geth team. But I'm mostly in this little silo with the research people.

Nicholas: Got it. So there's like a half dozen teams or something?

Domothy: Something like that. I'm still getting a grasp of the whole organization structure.

Nicholas: So it doesn't operate like a divisional corporation. Vitalik's not like the CEO, but he does seem to know about everything that's going on. But there's so much detail in every area. Are there individuals who know the whole thing? Or how do people relate to the breadth of work that's going on?

Domothy: People kind of just choose what interests them to work on. There's a lot of freedom involved. There's definitely not any kind of power structure where Vitalik tells us what to do, but there's a lot of freedom in choosing what interests us. And then people work on specific problems on the roadmap. I like to be more of the generalist that knows a lot about a lot of things, but not necessarily way in-depth the same way some of my colleagues do when they work on huge papers, very in-depth about a specific thing. I like to be more knowledgeable in general.

Nicholas: Got it. So I just have a few questions about EF in general, and then we'll dive into the roadmap specifically. And then I'd like to close with talking about some lessons that maybe DAOs could learn from Ethereum Foundation and how development happens on Ethereum. But just a few more questions on EF. Do people move between EF and specific other orgs? Like, I don't know, say the military and the military-industrial complex as a parallel? Are people moving between EF and other specific organizations or people come from all over and end up all over too?

Domothy: I really couldn't say. There's a lot of people from all over, but as far as actual transfers or people pivoting, I don't know of specific cases, so I can't really answer your question.

Nicholas: Okay, that's okay.

Nicholas: And I know we talked a little bit about this, but you're really focused at the protocol level and not as interested or aware of what's going on at the app layer. Is that generally true of your colleagues, would you say?

Domothy: Kind of. In the consensus team, we work a lot on the beacon chain specs and the whole roadmap, so it's very core infrastructure. Of course, there's an interest in all things Web3 and the application layer, but not as deep as in the protocol layer, with all the problems to solve at the core layer.

Nicholas: Right. Is it hard to find people who are willing to pay attention to the details of what you're trying to discuss? I feel like most of the discussion happens around app layer stuff, not so much around the protocol, actually.

Domothy: There's a lot of people interested in the protocol layer, but I can't really say if it's hard or easy to find them. They kind of just gravitate towards the circle. If you're on Twitter, you see a lot of the same names pop up, so after a while, you get a good grasp of who's who. I'm not really in charge of finding people. I'm the one who got found, so I don't really know how easy or hard it is.

Nicholas: Does the research team pay attention to what's happening on other blockchains, or is it very much like we have our own problems and we're looking to solve those, or do people seek inspiration from other activity in the crypto space?

Domothy: Yeah, if you look at the posts on the open research forum, ethresear.ch, a lot of the time there are lessons brought up from other blockchains, though nothing really pops to mind immediately. But I know from Vitalik's writing, sometimes he's very open about other blockchains and things that are done differently that maybe we can adapt or learn from.

Nicholas: So turning to the roadmap, where did this famous roadmap come from? Is there a name for the roadmap with the merge, the surge, etc.? Is there a name for that whole thing?

Domothy: It's just the roadmap. The whole diagram, yeah.

Nicholas: And where did that come from?

Domothy: Well, that was from Vitalik in December 2021, I think, the first version I've seen of it. There had been prior iterations of trying to put it all in one diagram. And that was the thing that made it click for me, that I knew too much about all these weird little spots on the roadmap, all these exotic acronyms. And I wanted to explain it all as best as I could, which is my pinned tweet right now, my living work of trying to put all the links in one place, to make the whole diagram a bit easier to parse if you're not as deeply knowledgeable as some of us who spend too much time on this protocol stuff.

Nicholas: Yeah, that HackMD page you've created is really great. Check out Domothy's pinned tweet if you're curious and want to take a look. So yeah, do you think it makes sense to go through and discuss, at a high level, what some of the interesting points are from the different areas of development on the roadmap? Does that seem feasible to you?

Domothy: Yeah, it's definitely feasible. Just a quick overview. Just make sure you have proper time management because we can get lost on any of these little boxes on the diagram.

Nicholas: Okay, I think that'll be okay, even if we do end up getting lost. And we have three or four questions people have submitted in advance that we'll make sure to get to. Maybe before we jump in, can you give an explanation of what the consensus and execution layers are, and any other equivalently important terms? Maybe also Verkle trees, which I feel are going to come up a lot, so we should probably give a little definition of that. But can you explain the consensus and the execution layer, and where we're at right now with respect to the development of how the blockchain actually functions?

Domothy: So the consensus layer, as in the beacon chain, is what's officially been merged with the execution layer. I'm not sure if you want me to define each layer.

Nicholas: Yeah, go for it because I feel these are terms people hear but maybe we're not so clear on what they mean or used to mean.

Domothy: So the consensus layer is basically having all the nodes agree on the order of transactions, on the order of blocks. And the execution layer is where the transactions are actually processed. So if you think of Ethereum as one big computer, that's why we get the nickname "the world computer", one transaction is going to affect the state of this computer that's shared by everyone. So: input, processing, output. That's one transaction, and the consensus layer is where they're all put in order, such that it's very expensive, to the point of being nearly impossible, to change the order of these transactions, so that everyone gets the same coherent state at all times.
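The "input, processing, output" view of Ethereum as a shared state machine can be sketched in a few lines. This is a toy illustration, not actual client code; the balances and transaction shape are made up for the example:

```python
# Toy sketch of Ethereum as a replicated state machine: each transaction
# deterministically transforms the shared state, and the consensus layer's
# job is to fix the ORDER in which transactions are applied.

def apply_transaction(state: dict, tx: dict) -> dict:
    """Toy state transition: move `value` from sender to recipient."""
    new_state = dict(state)
    assert new_state[tx["from"]] >= tx["value"], "insufficient balance"
    new_state[tx["from"]] -= tx["value"]
    new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["value"]
    return new_state

def apply_block(state: dict, block: list) -> dict:
    """A block is an ordered list of transactions; change the order
    and you can get a different final state."""
    for tx in block:
        state = apply_transaction(state, tx)
    return state

genesis = {"alice": 10, "bob": 0}
final = apply_block(genesis, [
    {"from": "alice", "to": "bob", "value": 3},
    {"from": "bob", "to": "alice", "value": 1},
])
print(final)  # {'alice': 8, 'bob': 2}
```

Because everyone replays the same transactions in the same order, everyone arrives at the same state, which is exactly what the consensus layer guarantees.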

Nicholas: Got it. And that's been merged with the execution layer?

Domothy: Yeah. So before the merge, all there was was the execution layer, the monolithic chain, and the consensus part was proof of work, where mining was the thing that would dictate the order of blocks. So now, instead of it being computationally expensive to change the order of blocks, by having more hash power and doing a 51% attack to reorg, the equivalent is the proof-of-stake layer. You have to have a lot of collateral and be willing to lose it if you want to mess with the system. So it's a different consensus mechanism. And that was a big event for Ethereum, the merge. And now it's done and gone and it's in the past. And it's pretty impressive how seamlessly that happened.

Nicholas: Yeah, extremely seamless. I mean, there was no problem at all, at least from the perspective of people using it. Got it. So how do we refer to the constituent pieces of Ethereum validation now? Do you say execution and consensus layer still?

Domothy: Yeah, there's the execution clients and the consensus clients. And they more or less work independently, except some updates, like withdrawals, needed to be coordinated across the layers. But other than that, it's become more modular. So you get two components: the consensus client works on one thing, getting things in order, and the execution client is just processing transactions. So they're more or less independent, even though they work together to do this thing that we call Ethereum. So there's just Ethereum, basically.

Nicholas: So if I'm a home staker, if I have my own node, am I running both pieces of software?

Domothy: Yeah, if you're a solo staker, you're running both pieces of software. Because if you only ran a consensus client, you would know things such as: this many validators voted for that block, and the chain has been finalized so that these blocks are in order. But your node wouldn't actually know if blocks contain invalid transactions. That's the job of the execution client. Before voting for a block, your consensus client is going to ask the execution client: is this block valid? Are the transactions legitimate? Are all the rules being followed? And if it's yes, then your consensus client is going to vote for that block, so everyone agrees that the blocks are valid and that they're in this specific order.
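The division of labor Dom describes can be sketched roughly like this. In real clients this handshake goes over the Engine API (methods like `engine_newPayload`); here the execution check is just a stand-in function with made-up block shapes:

```python
# Hedged sketch of the consensus/execution split: the consensus client
# only attests to a block once its paired execution client says the
# payload is valid. Block/transaction shapes are illustrative only.

def execution_is_valid(block: dict) -> bool:
    # Stand-in for the execution client re-executing every transaction
    # and checking all the protocol rules.
    return all(tx.get("valid", False) for tx in block["transactions"])

def consensus_should_attest(block: dict) -> bool:
    # The consensus client defers the validity question to the
    # execution client before casting its vote.
    return execution_is_valid(block)

good = {"transactions": [{"valid": True}, {"valid": True}]}
bad = {"transactions": [{"valid": True}, {"valid": False}]}
print(consensus_should_attest(good))  # True
print(consensus_should_attest(bad))   # False
```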

Nicholas: Got it. So all of the work that we're talking about now fits within the merge subsection of the roadmap? Yeah.

Domothy: So I don't know if you're looking at the diagram, but that was the version from last year that Vitalik posted before withdrawals but after the merge. Now that line has been pushed further toward the future, with withdrawals done, and there's still a pretty big thing coming up, which is single slot finality, which is to enhance the economic properties of the beacon chain: to make it even more expensive to reorg blocks that aren't yet finalized, by having finality happen every single slot. But there's a lot of stuff that has to be put to work before we get single slot finality.

Nicholas: So what teams are working on the many challenges on the path to single slot finality?

Domothy: The consensus team. I know Francesco is the one leading the effort on SSF, trying to work out the proper signature aggregation mechanisms to have a safe SSF. There are also a lot of explainer posts from Vitalik about where we're at, what we need, and what things might have to change in order to pave the way for SSF.

Nicholas: So all told, like how many people in the world are working on this problem or the various problems required to get to SSF?

Domothy: I don't know if that's quantifiable in a meaningful way, because it involves a lot of cryptography from outside; people use cryptography papers written by university professors who aren't even in crypto, but are in cryptography. So it's hard to say exactly how many people are working on SSF, but it's a weird, disjointed effort that somehow gets coherent at the protocol level, which is what we love about Ethereum, right?

Nicholas: So there are developments in academic cryptography that are necessary in order to achieve this Ethereum roadmap goal.

Domothy: Yeah. So the biggest problem is getting signature aggregation in a way that won't overload home stakers by making them process too many signatures from all the other validators. Because right now, that's the trade-off that's been made between how fast we want to finalize blocks and what bandwidth and computational constraints validators have. We could make it way more intensive to process attestations, and then we could have way more attestations in a single slot, but that would make it so that home stakers with lower bandwidth aren't able to keep up, and then you end up with data center nodes, which is what we don't want to sacrifice as far as decentralization goes.

Nicholas: As we move through this roadmap, does it matter if we go out of order, like in terms of the way the sections are divided?

Domothy: Oh, they're all parallel, right. Which is something that Vitalik added: the past and the future on the left and right, because people were confused by the first version of the diagram and thought there would be the merge, then the surge, then the verge, and so on. But it's a bunch of efforts happening in parallel on all these different fronts.

Nicholas: Got it. Okay. So we talked a little bit about the merge: single slot finality, essentially finalizing the chain every block rather than every many blocks. Is there anything else? I mean, I'm sure there are so many different issues. I see here secret leader election, quantum-safe aggregation-friendly signatures, all kinds of different issues. Is single slot finality the biggest thing people should be looking forward to with regards to the merge?

Domothy: Yeah, there's single slot finality. Before that, there might be the thing someone asked about, the max effective balance, which has been talked about a lot lately. That's going to help reduce the number of validators, but not the amount of staked ETH. So that's going to mean fewer attestations carrying the same overall weight. And one of the first steps on the way to single slot finality is getting rid of this maximum balance of 32 ETH per validator.
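The arithmetic behind the max effective balance idea (the proposal known as EIP-7251) is simple to sketch. The stake figure below is hypothetical, purely for illustration:

```python
# Back-of-the-envelope sketch: raising the max effective balance lets
# a large operator consolidate many 32 ETH validators into fewer, bigger
# ones, shrinking the number of signatures per slot without changing
# the total staked ETH. Numbers are illustrative, not real data.

def validator_count(total_stake_eth: int, max_effective_balance: int) -> int:
    """Minimum number of validators needed to hold `total_stake_eth`."""
    return -(-total_stake_eth // max_effective_balance)  # ceiling division

stake = 1_024_000  # ETH held by one hypothetical large operator
print(validator_count(stake, 32))    # 32000 validators to sign for
print(validator_count(stake, 2048))  # 500 validators, same total weight
```

Same stake, sixty-four times fewer attestations to aggregate, which is why this is seen as a stepping stone toward single slot finality.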

Nicholas: So that way you will need fewer signatures, essentially. It will reduce the number of signatures required. Okay. Interesting. By the way, as people are listening, if you have questions, feel free to send them to me, or you can raise your hand and we'll get to Q&A in a little bit. Okay. So next up, let's talk about the scourge. The goal here is to ensure reliable and credibly neutral transaction inclusion and avoid centralization and other protocol risks from MEV. What are the big challenges here? What's been achieved, and what do we have to look forward to?

Domothy: The main, the biggest block is really in-protocol PBS, which is now mostly known as ePBS, enshrined PBS: to basically take what's happening right now with MEV-Boost and enshrine it in the protocol, so that we don't have to rely on middleware for validators. It's going to be the same block auctions, but at the protocol level. There's still a lot of research happening to figure out the best and safest way to do that, with all the trade-offs involved.

Nicholas: Can you explain PBS? Can you define it?

Domothy: Yes. So right now, as far as the protocol is concerned, a validator, a staker, whether a home staker or a big Coinbase staker or Lido operator, is supposed to create blocks alongside voting for other people's blocks. And the problem with that is that MEV just kind of popped up while we were doing this whole research on proof of stake. The best way to explain MEV is that the way you order transactions can affect how much value you can extract out of your block. And it turns out creating blocks is not as fair as we thought it would be. Before MEV was a thing, the one strategy people would use is take the fees and just order transactions by however much they're willing to pay the validators for inclusion. This is the naive strategy, but it turns out it's not the most profitable. So what happened was that very sophisticated block builders would have the most optimized algorithms, the most bandwidth, the most computational power, and that would put home stakers at a disadvantage, because they don't have this edge as far as extracting value out of blocks. So one solution was to separate the whole process. Now you have proposers that propose blocks, and you have the builders that are, right now, outside the protocol. They're just trying to bribe the proposer with the best bid for their block, competing amongst builders. And now the best strategy, as far as validators go, is to just pick the highest bid. There's no optimized way to extract more value: you just look at what builders are offering and you pick the best builder. And that's something home stakers can do regardless of bandwidth, processing power, or algorithms. So it basically takes all the centralizing aspects of MEV and puts them in a little corner, and then we have the validator set that can stay easy, accessible, and decentralized. But this is all happening out of protocol right now with MEV-Boost, which is the first step on the little roadmap diagram: extra-protocol MEV markets. And the next step is going to be to enshrine that into the protocol, to basically ensure that it's a fair and controlled market happening within the protocol rather than out of protocol.
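The key point, that under PBS the proposer's whole "strategy" collapses to picking the highest bid, fits in a few lines. Builder names and bid values below are made up for illustration:

```python
# Toy sketch of a proposer's job under (e)PBS as described above:
# instead of optimizing transaction ordering itself, the proposer just
# accepts the best builder bid. No bandwidth or clever algorithms needed,
# which is why home stakers aren't disadvantaged.

def pick_builder(bids: dict) -> str:
    """A home staker's entire MEV strategy: take the highest bid."""
    return max(bids, key=bids.get)

bids = {"builder_a": 0.12, "builder_b": 0.31, "builder_c": 0.07}  # ETH
print(pick_builder(bids))  # builder_b
```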

Nicholas: And I guess this is an easy one for us to get rabbit-holed on. I'm curious. I guess I don't understand enough about how the blocks are built. But PBS is the separation between which parts? What is proposer-builder separation? Maybe you could explain that a little more.

Domothy: So yeah, a proposer, on top of voting for a block, for the history of the blockchain right up to that block, is going to propose a block that has all the stuff in it, the transactions and the other validators' signatures. And we call them proposers because they propose a block: hey, look at this block. And then other validators are going to check out the block, see if it's valid, see if everything checks out, and then vote for it. That's kind of the job of the consensus layer: to propose blocks and then attest to them, which is what a validator does. By the way, validator is terrible terminology for what they actually do, but that's what we're stuck with.

Nicholas: So you were going to say, why is validator a bad name for that function?

Domothy: Right, so it makes people think that solo stakers, or anyone who's a staker, are exclusively the ones validating, whereas you can actually run a node, whether it's consensus or execution, to verify and validate transactions yourself, even if you're not staking. So validator is a weird term, because you don't actually need to stake to validate transactions and follow the chain yourself, in order to not have to trust anyone. I don't know if that makes sense.

Nicholas: Yeah, yeah, it does make sense. We had a question from Django from the audience who wanted to know about the Scourge topic. He asks, probabilistically, what will mainnet gas be exactly 10 years from now? Do you have a sense of what gas will be like in 10 years?

Domothy: It's either going to be very, very high or very, very low. There's a lot of speculation involved in this answer. If we go all the way with L2 scaling, then the congestion is going to be happening on the data availability side of things, while the actual execution is going to be congested and very expensive, basically reserved for whales and institutions sharing liquidity that really want the deepest and most secure layer. But on the other side of that equation, there's the idea of enshrining rollups at the base layer, which would actually scale execution at layer 1 too, in which case gas would be cheap, because execution would scale alongside the data, which is something we're getting to with the surge and the whole sharding effort. So it's very hard to say how long it'll take before we get an enshrined zkEVM at layer 1, because that's something we need to develop at layer 2 first.

Nicholas: So I guess some cynical outsiders might say that the motivation of the EF and people working at the protocol level would be to increase the value of ETH by keeping block space costly. But the long-term plan is very much an intent to make it as cheap as possible to put information on the blockchain, to use the blockchain, either via L2s or even, as you're saying, a zkEVM enshrined in the L1. Gas could very possibly be cheap in the future.

Domothy: Yeah. And there's a, it's actually the other way around. If we wanted to maximize value and fee revenue, we would want, we would aim for maximum scaling in order to have, like, many, many billions of transactions per second at very, very, very low fees, which like an aggregate would be more revenue for the chain than having it congested, very expensive, but not many transactions happening. So that's kind of the goal is having high aggregate fees, but low individual fees.

Nicholas: Ah, I see. Okay. That makes more sense. Valholics also wanted to know if people want to follow the single-slot finality subject, are there specific resources that people should keep up to date with?

Nicholas: Let's move on to the purge. So the goal of the purge is to simplify the protocol, eliminate technical debt, and limit the costs of participating in the network by clearing old history. This is a very hot issue for people who are doing on-chain NFT artwork. 113 asks a few questions. The first one is: how can very long-term-minded artists who make EVM pieces, where the contract is the art, using things like SSTORE2, prepare for state expiry? What is the current thinking on state expiry? So I think for this, maybe we first need to understand the difference between history and state expiry. Is that right?

Domothy: Yeah. So history and state are two different concepts. Having artwork inside the state sounds crazy expensive to me compared to just having it in history. But either way, it's kind of a weird thing, where the goal of a blockchain is to get everyone to agree on the order of blocks, but it's not really a scalable model where you pay once and then have your data stored by everyone forever. If you think hundreds of years from now, having to put that burden on all the nodes to store the entire history is kind of hard to fathom, because it's just going to keep growing, especially with the surge, where all this data from rollups is going to be coming in.

Nicholas: So the history is, say, the data of past transactions, but not necessarily something that's available within the scope of a function, like a Solidity contract function, for example.

Domothy: Yeah, exactly. So the best analogy I've heard, from Vitalik, is that the history is like the record. If you think of the blockchain as a country, the history is the record of everyone who's been born and everyone who has died. And the state is the list of everyone currently alive. So if you have the history, you can follow along and update your own little ledger: this person was born, this person died, so take them off the state. And when everyone follows the same history, you arrive at the same state, with the list of everyone alive, their ages and so on. So the history is useful to reconstitute the state, but you don't strictly need it, because nodes can just boot up, sync with each other, and ask each other what the state is without needing the whole history.
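Vitalik's country analogy can be sketched directly: history is the event log, state is what you get by replaying it. The names and events are, of course, invented for the example:

```python
# Sketch of the history-vs-state distinction: replaying the full event
# log (history) reproduces the current state, but a node that only wants
# the state can just download it and drop the old history, which is the
# intuition behind history expiry.

history = [
    ("born", "alice"), ("born", "bob"),
    ("born", "carol"), ("died", "bob"),
]

def replay(events: list) -> set:
    """Derive the state (everyone currently alive) from the history."""
    alive = set()
    for event, name in events:
        if event == "born":
            alive.add(name)
        else:
            alive.discard(name)
    return alive

print(sorted(replay(history)))  # ['alice', 'carol']
```

Two nodes that replay the same history always agree on the state, and a node handed only `{'alice', 'carol'}` loses nothing it needs going forward.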

Nicholas: Got it.

Domothy: I kind of dropped the analogy there.

Nicholas: So in the roadmap for the purge, EIP-4444 is mostly about getting rid of the history but not affecting the state? Yeah.

Domothy: So EIP-4444 is basically not even a protocol-level change. It's just at the P2P layer: nodes will stop being expected to be able to serve any arbitrary block. They can still make blocks available if they want, or store them somewhere. But if you ask a node about a block that's over a year old, they're no longer expected to give you the block the way they are today.

Nicholas: So let's say, in an NFT contract, if I mint an NFT and I wait a year or two, every node will still be expected to be able to query the owner of the NFT and return the result that it's me, even if there's been no activity on that ownership within this period of time? Because they would have lost the history of the transaction, but the state is still in memory, essentially?

Domothy: Yes. So if you're talking about an NFT, like an ERC-721 contract, the ownership is going to be in the state, so you're going to be able to query any node and have it tell you who the owner of this NFT is, by querying the actual contract. But I think the question from the audience was about state expiry. If they want to store the actual artwork of the NFT, not just ownership and metadata, like the actual JPEG inside the blockchain, they can either put it in the history or in the state, and history is going to be much cheaper than state, because state growth is a big problem as far as decentralization and the processing burden go.

Nicholas: Right. 113 is a blockchain artist focused on on-chain art, which is to say the art is generated by functions and state, not JPEGs, but rather computational art that's generated by the smart contracts through their storage variables and accessed via functions. So what I'm hearing from you is that there's no plan to purge state, contract state.

Domothy: Yeah, if we purged it, that would be state expiry, which is on the roadmap. But as you asked me at that meetup two days ago, I think it's mostly on the back burner compared to history expiry, which is much less complex.

Nicholas: Okay, so EIP-4444 addresses the history expiry. Yeah. Is there any EIP for state expiry? For people who are making artwork and they really want it to exist in 100 years, or even 10 years, and it's in state, what makes you say that state expiry is on the back burner, basically?

Domothy: Because state expiry, in my opinion, is just very complex, and it breaks a lot of existing contracts and changes a lot of assumptions developers have about how the blockchain works, whereas history expiry is just a simple peer-to-peer convention change.

Nicholas: Do you think it'll ever happen? State expiry?

Domothy: I couldn't say for sure, but I'm pretty sure once we have Verkle trees, enshrined PBS, and history expiry, there's not going to be as much pressure to work towards state expiry as there once was, when state growth was a big concern.

Nicholas: I see. Okay, so 4444 will reduce the burden on nodes sufficiently that maybe state expiry is just not as relevant? Yeah.

Domothy: Because I think stateless clients in combination with Verkle trees are going to be much more interesting, where you can have a client that doesn't need to store any of the state in order to trustlessly verify a block. So that alleviates a lot of the problems that come with state growth.

Nicholas: Got it. And is that the light client development? Is that part of the surge?

Domothy: I wouldn't say it's part of the surge, because there's a difference between a light client and a stateless client.

Nicholas: Okay. Can you explain?

Domothy: A light client mostly follows the block headers without actually verifying everything. It relies on what we call trust minimization, so it's not entirely trustless. And this is something that happened in the Altair fork, at the top of the diagram in the merge section: we added what we call sync committees, mainly to help light clients. It's 512 validators, rotated every 27 hours I think, and they get more rewards, but they're in charge of signing block headers for the benefit of light clients. So light clients don't need to validate blocks to check the order of blocks; they just rely on these 512 validators. Whereas a stateless client would be like a full node today, but without the state. It still doesn't need to trust those 512 validators; it can be fully trustless: check a block, verify the proof about the state accesses, then execute the block normally to get the next state and verify that it's valid.

Nicholas: Got it. Okay. So what's blocking us from achieving that, from getting to stateless clients?

Domothy: The big thing is the state that we have right now, which is a Merkle tree. It's a Patricia Merkle trie with a branching factor of 16, so it's like a tree where the state root has 16 children, and every child has 16 children, until you get down to the actual values stored in the state. And the problem with that is the proofs: if you want to prove that a specific part of the state, say that slot 0x5AB8 has a value of 256, then the proof that's going to convince a client that only has the state root is going to be huge. So it's not realistic on a bandwidth level to have these gigantic proofs when clients hold just a state root. But once we have Verkle trees, not only are the proofs much, much smaller for state accesses, they can also be merged together into even shorter proofs. So that's the unreasonable power of Verkle trees.
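To make the proof-size point concrete, here is a toy hexary Merkle tree in Python. It is only a sketch: the real Patricia Merkle trie uses keccak256, RLP-encoded nodes, and path compression, none of which is modeled here. What it does show is that a proof for one value must carry 15 sibling hashes at every level, which is why these proofs get large.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

# Toy hexary Merkle tree: every node has 16 children, like Ethereum's
# Patricia Merkle trie. A proof for one leaf must include the 15 sibling
# hashes at every level, which is why these proofs get large.
def build_level(nodes):
    return [h(*nodes[i:i + 16]) for i in range(0, len(nodes), 16)]

leaves = [h(bytes([i % 256])) for i in range(16 ** 3)]  # 4096 leaves, depth 3

levels = [leaves]
while len(levels[-1]) > 1:
    levels.append(build_level(levels[-1]))
root = levels[-1][0]

# Proof for leaf 0: at each level, the 15 siblings in its group of 16.
proof = []
index = 0
for level in levels[:-1]:
    group = level[(index // 16) * 16:(index // 16) * 16 + 16]
    pos = index % 16
    proof.append((pos, [n for i, n in enumerate(group) if i != pos]))
    index //= 16

proof_bytes = sum(len(sibs) * 32 for _, sibs in proof)
print(proof_bytes)  # 3 levels * 15 siblings * 32 bytes = 1440 bytes for one leaf

# Verify: recompute the root from the leaf plus the proof alone.
node = leaves[0]
for pos, sibs in proof:
    node = h(*(sibs[:pos] + [node] + sibs[pos:]))
assert node == root
```

Even in this tiny tree a single-value proof is 1440 bytes; real state proofs over a much deeper trie are correspondingly larger, which is the bandwidth problem Verkle trees address.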

Nicholas: Okay, so Verkle trees will allow us to achieve stateless clients because you can do proofs; they make it bandwidth-reasonable to prove state without having to download the entire chain.

Domothy: So blocks will be self-referential. Basically, you'll have the block that has the state root the previous block had, and then a list of state accesses and the merged proof. And all your stateless client has to do is take the state root, take the big merged proof, and validate that; it doesn't need the actual state for that.

Nicholas: Got it, okay. And to kind of round out this question from 113: if someone is trying to make a contract that will not be pruned from state, given any of the foreseeable ways that could happen, is there anything that should be included in a contract, or principles by which someone should design their contract, so that it stays live should any kind of state expiry come to pass?

Domothy: The idea with state expiry is that unused state would be pruned by nodes, and then you'd have to bring it back from somewhere else, with a proof that it used to be inside the state, and then it would come back to life inside the part of the state that's in use. So it would be someone else's responsibility to revive that part of the state so that it becomes live inside the non-pruned state that all nodes have. As far as developing around that, it really depends on the actual implementation of state expiry, which, as I said, is extremely complex and has a lot of trade-offs. But my best guess would be that the contract developer would build in some sort of incentive for external people to revive that part of the state. For example, you have a smart contract call that says: if you revive this state, I'm going to give you 0.01 ETH. That's something the smart contract handles, and then that 0.01 ETH becomes MEV, and block builders see that they can just take that old state, revive it, and get that money. It would be done basically automatically, without the developer of the contract, even if it's been a hundred years and the developer is dead.
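That revival-bounty idea can be sketched as a toy in Python. Every name here is hypothetical (state expiry has no settled design): expired state survives only as a commitment, and anyone who supplies the matching value gets it back into active state and earns the bounty.

```python
import hashlib

def commit(key: str, value: str) -> bytes:
    return hashlib.sha256(f"{key}={value}".encode()).digest()

# All names here ('ToyExpiringState', 'revive', the bounty amount) are
# illustrative only; real state expiry has no settled design.
class ToyExpiringState:
    REVIVAL_BOUNTY = 0.01  # ETH, the incentive floated in the transcript

    def __init__(self):
        self.active = {}         # state that nodes still hold
        self.expired_roots = {}  # only commitments survive expiry
        self.bounties_paid = 0.0

    def expire(self, key):
        value = self.active.pop(key)
        self.expired_roots[key] = commit(key, value)

    def revive(self, key, claimed_value):
        # Anyone (e.g. a block builder chasing the bounty as MEV) can
        # revive expired state by supplying the exact value matching
        # the surviving commitment.
        if commit(key, claimed_value) != self.expired_roots[key]:
            raise ValueError("bad witness")
        del self.expired_roots[key]
        self.active[key] = claimed_value
        self.bounties_paid += self.REVIVAL_BOUNTY
        return self.REVIVAL_BOUNTY

state = ToyExpiringState()
state.active["artwork"] = "onchain-svg-data"
state.expire("artwork")
assert "artwork" not in state.active           # pruned from active state
reward = state.revive("artwork", "onchain-svg-data")
assert state.active["artwork"] == "onchain-svg-data" and reward == 0.01
```

The point is the incentive shape: the contract never needs its original developer around, because the bounty makes revival profitable for whoever holds the witness data.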

Nicholas: Right, so to do that, the function on the contract would have to make use of the storage value, like bring it into, I don't know, manipulate it inside of an external function, essentially?

Domothy: Yeah, but the specifics really depend on the implementation of state expiry.

Nicholas: I guess stateless clients are maybe more part of the verge: verifying blocks should be super easy, download n bytes of data, perform a few basic computations, verify a SNARK, and you're done. Is that where stateless clients fit in?

Domothy: Yeah, they permeate the whole roadmap, but in the verge you get Verkle trees, you get stateless clients, which is part of the reason why I think state expiry is going to be on the back burner for a long while. With stateless clients and PBS, you basically get state expiry almost for free, as a cheat code, where you can think of it like a semi-stateless client: you start your client, it has no knowledge of the state whatsoever, it syncs with the consensus layer, and then as it receives blocks it can check the state proofs and put them in its local memory. So it has a partial view of the state, covering the most often used parts, and the least used stuff would naturally never be brought up to that client. So it's kind of like state expiry, but in a cheat code way.

Nicholas: Right by default in the design of the node.

Domothy: Yeah.

Nicholas: I want to move on to the surge in just a second, but first I have to mention our sponsor for today's episode: it's CropTop. Do you own a .eth name? If so, you can use it to serve your own decentralized personal site, served peer-to-peer, with content feeds and revenue streams baked in, accessible from any browser. No emails, no passwords, no wallets required. You can check it out at croptop.eth.limo. It's a very cool project a friend of mine is working on: it basically creates a site where you can mint every post very easily, and it's hosted on IPFS with a very nice Mac app. So check it out at croptop.eth.limo. Thanks to CropTop for sponsoring this episode. So, the surge: the surge is 100,000 transactions per second and beyond, including rollup bandwidth. I know that one step that's very important to this is data availability. Can you explain the problem of data availability?

Domothy: Yeah, so as the name says, it's knowing whether or not data is actually available, but it's much more complicated than that once you try to get clever about it. Because right now, the way Ethereum solves the problem of data availability is that every node checks everything and downloads everything, which is a bottleneck for scaling: if you try to scale by increasing the amount of data, with bigger blocks, then small nodes can't keep up. So instead we have to get really clever about it, using a lot of fancy cryptography and math to make it so that your node only has to verify a small amount of data and be convinced that the entire data is available.

Nicholas: When you say data is available you mean like blockchain history or state?

Domothy: Or an entirely different, secret third thing, which is the blob space from EIP-4844.

Nicholas: We can dip into that for a second. The idea here is that the data L2s write to L1 should be segregated from the rest of the transaction data?

Domothy: Yes and it should be cheap. Those are the main goals.

Nicholas: So basically, the concept of a rollup is going to be inscribed directly into L1, rather than just using traditional transactions and smart contract architecture to write data to L1.

Domothy: A fully trustless rollup would be... you can think of a smart contract on layer 1 that validates batches of transactions happening at layer 2. And to have the same trustlessness guarantees, layer 1 has to enforce that the rollup sequencer can't censor you, can't change the state arbitrarily, and can't mess with your funds. And to do that, you need to have all of the rollup data available at layer 1. Again, it's a big problem right now because we don't have any cleverness to scale this data. So it's easier to...

Nicholas: Yeah it's going to be segregated.

Domothy: Right now the rollups post this data in what we call calldata: they make a call to their smart contract and put all the data inside the transaction, so it doesn't actually get processed by the EVM, if you think of something like Arbitrum or Optimism. They put all the data there for everyone to see, and if one of the sequencers tries to cheat, then you're going to have someone who's going to snitch and make a fraud proof, once we have no more training wheels on rollups, of course. Am I getting ahead of myself here?

Nicholas: No, no, that's perfect. Okay, so the data availability problem is to know... maybe can you rephrase it again? It's to make sure that who has what data, specifically?

Domothy: Basically, whoever wants the data can get it, so nodes can't collude and try to get you to follow a fork of the chain that has no data available. I think the best starting point is really optimistic rollups, because that's the best way to conceptualize it. You have sequencers who claim that a certain set of transactions happened at layer 2 and claim that it's valid, and the layer 1 smart contract is going to say, okay, I'm optimistic about it, I'm going to accept this change in the rollup state without actually checking and executing these transactions. So that's why there's a delay to withdraw: if nothing has contested the transactions in this batch after a week, then the rollup accepts it, and now you can process a withdrawal and get your funds out of the rollup. So a big component of that is fraud proofs. If a sequencer actually does send a transaction that says, okay, your funds on Arbitrum are now mine, arbitrarily, you had one ETH and now I have one ETH, and commits that to layer 1, that's going to be an invalid transaction, right? So anyone who sees that data can see, okay, that's not legit, I'm going to make a fraud proof and submit it to the L1 contract to penalize the sequencer and not get my funds stolen. But in order to do that, the data needs to be available. If you're checking the chain for these invalid transactions and the data is not available, because a bunch of validators are colluding, saying we're voting on these blocks but there's no actual data there, then no one can make a fraud proof, and they can steal a bunch of money from the rollup. That would be a big problem, right? That would be more extra trust assumptions than we would like.
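The optimistic flow described here can be sketched as a toy in Python. The names are illustrative only, not any real bridge contract: batches are accepted without execution, and a challenger who has the posted data can revert an invalid one inside the challenge window.

```python
# Toy optimistic rollup bridge: the L1 contract accepts batches without
# executing them, and only a fraud proof (which requires the posted data)
# can revert one during the challenge window. All names are illustrative.
CHALLENGE_WINDOW = 7  # days, matching the one-week withdrawal delay above

class ToyRollupBridge:
    def __init__(self):
        self.batches = []

    def post_batch(self, state_root, data, day):
        # Accepted optimistically: no execution, just stored with its data.
        self.batches.append({"root": state_root, "data": data,
                             "day": day, "reverted": False})

    def fraud_proof(self, index, day, recompute):
        batch = self.batches[index]
        assert day - batch["day"] < CHALLENGE_WINDOW, "window closed"
        # The challenger re-executes the posted data; this is only possible
        # because the data was made available on L1.
        if recompute(batch["data"]) != batch["root"]:
            batch["reverted"] = True
        return batch["reverted"]

honest_root = lambda data: hash(("exec", data))  # stand-in for re-execution
bridge = ToyRollupBridge()
bridge.post_batch(honest_root("tx batch 1"), "tx batch 1", day=0)  # valid
bridge.post_batch("stolen-funds-root", "tx batch 2", day=1)        # invalid

assert bridge.fraud_proof(0, day=2, recompute=honest_root) is False
assert bridge.fraud_proof(1, day=2, recompute=honest_root) is True
```

Note how `fraud_proof` is impossible to call meaningfully if `batch["data"]` were withheld, which is exactly the data availability failure mode described above.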

Nicholas: So this is data availability of the L2's chain history.

Domothy: Yeah but as offered by layer one.

Nicholas: Okay, I see. So who is... or, is Danksharding a solution to this?

Domothy: Yes.

Nicholas: So who is Dankrad, what is Danksharding, and how does this all lead to 4844?

Domothy: So Dankrad is a researcher at the Ethereum Foundation, also on the research team, and he came up with Danksharding, which was a completely alternative way to do sharding that only does data sharding. So execution sharding is off the table for now, and we're focusing on data sharding for these L2s. Like I said earlier, the big bottleneck for scaling is that you can increase the amount of data, but if every node needs to store all the data to make sure it's available for everyone, that's a big problem. So instead we do clever stuff to shard that data: every node has a small piece of the data, but it's also constructed in such a way that with just a small amount of data, you can query a node and say, okay, what's this part of the data? And by doing a little bit of extra computation, because you receive these samples of data and they all check out together, they're all consistent, you know that the probability that the entire data isn't available is extremely low, like 1 over 2 to the 128, astronomically low probabilities.

Nicholas: Okay so it's a way to guarantee that the data is available.

Domothy: Yeah, so validators couldn't cheat: a rollup sequencer couldn't commit a transaction, withhold the data, and have validators vote on that block, because they would say, I just checked with these sampled pieces and it doesn't check out, so I'm not going to vote for your block.

Nicholas: I see.

Domothy: Yeah, so the rollup sequencer couldn't get away with posting an invalid transaction and hiding the data.

Nicholas: So it's a cryptographic solution to data availability at the time that the rollup proofs are committed to the L1 chain.

Domothy: Yeah pretty much.

Domothy: Yes. So it's all polynomial commitments. Yeah, the KZG ceremony is a way to remove the trust that's inherently necessary for these cryptographic constructions. It's what we call a trusted setup: there has to be a party that generates random data and then encrypts it, and you have to trust that this party threw away their secret data unencrypted. We don't want that trust embedded at layer 1, right? That would be bad. So instead, what we do is merge the random values of everyone, all encrypted, so that you only need one person to be honest in throwing away their data for the whole system to be secure and incorruptible.
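That one-honest-participant property can be sketched with plain modular exponentiation standing in for elliptic-curve points (the real ceremony uses BLS12-381 powers of tau; this is only the shape of the idea). Each contributor mixes their own secret into the running value, so the combined secret is the product of everyone's, and it stays unknown as long as any single participant deletes theirs.

```python
# Toy powers-of-tau style contribution chain. Modular exponentiation in a
# prime field stands in for elliptic-curve scalar multiplication; none of
# these constants come from the real ceremony.
P = 2**127 - 1          # a Mersenne prime, our toy group modulus
G = 3                   # toy generator

def contribute(current_point: int, my_secret: int) -> int:
    # Raising the point to my_secret multiplies my_secret into the
    # hidden combined exponent, without revealing it.
    return pow(current_point, my_secret, P)

point = G
secrets = [123456789, 987654321, 555555555]  # each participant's randomness
for s in secrets:
    point = contribute(point, s)

# The effective secret is the product of all contributions (mod group order).
combined = (123456789 * 987654321 * 555555555) % (P - 1)
assert point == pow(G, combined, P)
# Nobody can recover `combined` unless they learn EVERY individual secret,
# so one honest participant deleting theirs secures the whole setup.
```

Contribution order also doesn't matter, since the exponents just multiply, which is why participants can join the ceremony sequentially without coordination.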

Nicholas: And is that still ongoing, or is the KZG ceremony over?

Domothy: There was a second round earlier. I think it's still ongoing, but everyone who wanted to contribute has, yeah. The sequencer is online right now: there are 111,000 total contributions and zero participants in the lobby. Okay, looks like it's still ongoing for 16 more days.

Nicholas: Oh, that's a good time to go visit the page. ethchad19 wanted to know: how long before all scaling-related upgrades are implemented? Big question. And, maybe more reasonable to answer, do you expect 4844 to go live this year?

Domothy: So for the first question, how long until all the scaling solutions? That includes full Danksharding. It's such a long time from now that it's impossible to say for sure; there are still a bunch of networking things to figure out to make it safe. But 4844 is the very first step for that, and I'm pretty sure it's going to be implemented this year, seeing all the progress happening. It's basically all implemented and being tested on devnets, and now it's the usual coordination fork between all the clients. Amazing.

Nicholas: Okay, let's move to, I think, the last one on the roadmap list at least: the splurge. So "fix everything else" is the goal. And maybe one of the bigger pieces of this subsection of the roadmap is account abstraction and 4337. Could you talk a little bit about how... I mean, I know 4337 is sort of an ERC, whereas on other chains it'll be integrated directly into the chain, and maybe eventually directly into L1.

Domothy: It's an ERC, so it's not implemented in layer 1; you don't actually need protocol changes for it. It's smart contracts talking to each other, the same way ERC-20 works, basically. It's just a standard.

Nicholas: And maybe, I guess, for people: can you explain account abstraction a little bit, and then we can talk about what it means for it to be integrated into the chain?

Domothy: I can explain it a little bit, but it's not really my area. Basically, it's to fix the problems with the crypto UX of private keys and seed phrases and everything. You want smart wallets that still do everything for you and keep your money secure, not in a custodial way, but still having guarantees for something like social recoverability: you designate trusted people, and you need, say, three of five of them to recover your money if you mess up and lose your keys, or if anything happens. But the first step for that is having an actual standard for how these smart wallets are supposed to interact with each other, and this is what ERC-4337 does. It's an ERC; it doesn't require any protocol-level changes. I know there was an EIP that would make a protocol-level change to pave the way for account abstraction, but that never came to life. And one of the selling points is being able to pay for gas in any token you want, by having this alternative mempool just for 4337 operations. I'm rambling now.
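For reference, this is roughly the shape of a 4337 UserOperation, the object that travels through that alternative mempool. The field names follow the ERC's v0.6 definition; the placeholder values are purely illustrative.

```python
# Shape of an ERC-4337 UserOperation (field names per the ERC, v0.6).
# The paymasterAndData field is what lets a third party sponsor gas,
# e.g. in exchange for ERC-20 tokens, as mentioned above. Values here
# are placeholders, not real addresses or calldata.
user_op = {
    "sender": "0x...",            # the smart wallet contract itself
    "nonce": 0,
    "initCode": "0x",             # wallet deployment code, if not yet deployed
    "callData": "0x...",          # what the smart wallet should execute
    "callGasLimit": 100_000,
    "verificationGasLimit": 150_000,
    "preVerificationGas": 21_000,
    "maxFeePerGas": 30_000_000_000,
    "maxPriorityFeePerGas": 1_000_000_000,
    "paymasterAndData": "0x",     # set by a paymaster to sponsor the gas
    "signature": "0x...",         # validated by the wallet's own logic
}
```

The key contrast with a normal transaction is the last field: the protocol doesn't dictate how `signature` is checked, the wallet contract does, which is what makes schemes like social recovery possible without protocol changes.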

Nicholas: It's interesting. Do you think the in-protocol enshrining of account abstraction is likely, or not necessary?

Domothy: It's something I've only heard from Vitalik, and there hasn't been much effort behind it that I know of. But one potential path toward in-protocol account abstraction is starting with 4337, then having a way to turn any normal wallet address into this 4337 format, and then enshrining that in the protocol. It's not something I hear a lot about; I know Vitalik has written a few posts about it.

Nicholas: That was the first time I'd ever heard of this idea of converting EOAs into 4337-style account abstraction wallets. That's a very cool concept. Yeah.

Domothy: And I'll be honest, writing that document was the first time I learned about it too, just trying to parse what Vitalik meant in his roadmap.

Nicholas: In the section of the roadmap breakdown that you give, you also talk about the Ethereum endgame post that Vitalik wrote, and this concept of an Ethereum endgame. As we finish up our mosey down all these different subsections of the roadmap: what is the Ethereum endgame? How do you even conceive of it? Obviously it's so far away that it's hard to put a date on, but can you describe what it is and how we get there, roughly?

Domothy: Well, the endgame would be the way the protocol looks once you don't need to upgrade it anymore. Something that's safe against quantum computers, so basically all the cryptography needs to be replaced by something secure. And all the scaling is done, all the scaling that doesn't sacrifice any of the decentralization and trustlessness guarantees that we love, but still scales to billions and billions of TPS. And I would say the TL;DR of that post on Vitalik's website about the endgame is that we extract scalability from very optimized, centralized components, but in a way that makes it so they can't steal your money, can't censor you, can't mess with the system, while you still get the efficiency benefit for scaling. So you still don't have to trust anyone, and if they ever go down there are fallbacks; it's all decentralized the way we want.

Nicholas: Jin in the audience wanted to know... firstly, Jin's a fan of HackMD and loves that the Ethereum Foundation uses it. They wanted to know: do you use the book and slide features at the Ethereum Foundation?

Domothy: I think some people do for talks, but those are side things. Usually it's just for quick notes and documents. I haven't seen much use of the book feature.

Nicholas: Jin also wanted to know how much of the org is remote?

Domothy: Almost all of it. I know there's an office near Denver and one in Berlin I think, but most people are just remote. They work from wherever they happen to be at that time, if they're traveling or if they have a home base.

Nicholas: From what I understood, the physical locations are just like if there's enough people in a certain place, they'll rent a co-working space or something. But it's not a major physical... Yeah.

Domothy: Like during ETHDenver, there was the office in Boulder, and people would just take the bus there or carpool between the conferences and the office. But otherwise it's all remote.

Nicholas: Very low-key. And Jin also wanted to know... Jin's a big VR/XR person and wanted to know whether there are any virtual HQs? Any metaverse spaces or anything similar?

Domothy: Not that I'm aware of.

Nicholas: Is there a Slack or Discord or something that people hang out in?

Domothy: Mostly just Telegram for coordinating.

Nicholas: And Jin finally wanted to know if they can donate a 3D model of the ETH logo to the Ethereum Foundation somehow.

Domothy: They would have to check with the ethereum.org team. I don't know much about that team other than that they run the GitHub for the actual Ethereum.org website. So if you look into the GitHub, you'll probably find the right people to contact about donating a 3D model. Maybe you can just make a pull request and have it approved.

Nicholas: Okay, so we've gone through... I mean, obviously it's impossible to talk about this roadmap exhaustively. And I guess one of my biggest questions is: building this blockchain and this whole ecosystem... in design and engineering, simplicity is often lifted up as an ideal of good design, but there are a lot of moving parts here; it's not really very simple. Do you have any thoughts on what it means to build something that's good but also so incredibly complex? Do people think about simplicity ever, or is it more about solving specific cryptographic and game-theory problems?

Domothy: Yeah, there are a lot of ideas that are basically non-starters just because they would involve way too much complexity and attack surface, potential exploits that we just can't think through because there are too many moving parts, like you said. So one way to tame all of that is to modularize everything: everything is its own little module with limited responsibilities, which can be tested and kept simple, and then all these parts interact together. But yeah, it's a pretty big responsibility to try to keep everything in mind when you're trying to fix problems and develop new solutions.

Nicholas: But basically modularization is kind of the answer, is what you're saying.

Domothy: Yeah, which is a big thing about the L2-centric roadmap: you can just let the rollups fight among each other to figure out the best model, the most secure one. And then once we have that, especially with ZK rollups, because ZK is its own beast, we wouldn't want to start with a ZK-EVM at layer 1 and just hope there are no bugs. So you let the rollups figure it out, with their own little training wheels, progressively getting more secure over time, and then we can look into taking one of these ZK-EVMs and enshrining it.

Nicholas: That's very interesting, because you could imagine someone making the opposite argument, that it would be best to start from scratch with a ZK-EVM or a ZK blockchain. But what you're saying is that the security implications are actually hard to grapple with, so it's better to start with something simpler and easier to understand, like Ethereum. Yeah.

Domothy: Of course, if we had to start over while somehow keeping the network effects... I'm sure people who work with the EVM are more aware than me of all the technical debt that you can't just get rid of. But there's a lot that, in hindsight, would probably be done differently if we started over knowing everything we know now.

Nicholas: Is it possible for you to quantify how much of the attention of the research team, or in general the people working on the roadmap, let's say directly inside the EF, goes to dealing with technical debt versus building out new solutions to problems?

Domothy: I couldn't really say.

Nicholas: Is it a lot?

Domothy: It's always in the back of the mind, but I work with the consensus team, so the proof-of-stake specs are more isolated and newer than something like the EVM. I'm not really aware of the technical debt as far as the EVM goes; I'm not really into that whole area.

Nicholas: Relative to DAOs, where I have some experience, and lots of people have had some experience over the past couple of years, Ethereum core protocol development seems higher quality; certainly better than the average, and even the best DAOs have trouble achieving this kind of quality. We talked about the merge going essentially flawlessly, although maybe it's a bit slow, or maybe some people perceive it as slow, perhaps because they don't understand the challenges the researchers are facing. But I'm curious about this decision-making process. I have all kinds of questions about it, but overall: is there something we can draw from how the Ethereum Foundation and Ethereum development in general work? Lessons that could be applied to projects in the ecosystem that are trying to organize themselves? Is there anything that comes to mind, like, if only DAOs understood that this is actually the key, or some way of approaching the problem?

Domothy: I'm not really sure specifically how DAOs typically work, but core protocol development can be pretty slow, because there's a lot of testing and implementing, especially with all the different client teams that have to coordinate with each other to develop the thing in a consistent way. So it can be slow, but it's worth it for the extra resiliency we get out of it. I'm not sure how that translates to DAOs, though, because my mental model of DAOs is token voting; I haven't looked into it much more than that yet.

Nicholas: I think many of them are token voting. Maybe let's break this down: how do topics emerge? How do things get onto this roadmap? Of course Vitalik makes a post, but there have got to be some steps before Vitalik has fully synthesized the thing into the version we all learn about through his blog posts. Is it just Ethereum Magicians, or how do these issues come up and get discussed in the first place?

Domothy: It usually starts with just someone saying, hey, what if we did this? And then talking to other people, and then something on ETH Research gets drafted, and it brings more attention, and eventually an actual EIP gets drafted for it. It doesn't have to involve Vitalik, as your question seems to imply. He's very knowledgeable, but overall he's just some guy on the research team, which is what he's hoping to be. And then for the general public... yeah, I try to be one of the people that digests it for the general public. Other than that, there's the EIP, and then there's going to be Ethereum Magicians arguing over specifics in the EIP. If anything breaks something else, someone's going to bring it up, and everything happens out in the open, that's for sure. It's not just the EF. If we do an EIP and it breaks something for, say, Uniswap, then the Uniswap team is going to be like, hey, maybe you can do it this way instead? It still achieves the goal, but it doesn't break anything for us or other developers. And then eventually it gets to the point where it has to be implemented, where the All Core Devs are going to be debating whether it's a priority, whether the client teams can squeeze it in before the fork, or whether it's worth delaying forks for it. There's a lot of debating happening between these different client teams, and it's not rare that even Vitalik can get vetoed by one of the client teams that says, no, we have to prioritize something else. A lot of that happened with the merge and withdrawals: at some point everything else was set aside and it was all hands on deck on the merge, and Vitalik wanted EIP-4488, I think is the number, as an easier way to make calldata cheaper for rollups, and it just never happened. It was just not a priority compared to the merge and withdrawals.

Nicholas: So you mentioned ETH Research. Ethereum Magicians is a Discourse forum, right? What's ETH Research?

Domothy: ETH Research is more formal, but not as formal as a research paper, let's say. It's people making higher-effort posts about any sort of research proposal, whether it's core protocol or smart contract standards or anything. There's really a lot of very specific information about a lot of topics on ETH Research. The URL is a .ch domain: ethresear.ch.

Nicholas: Okay, I see, I see.

Domothy: Yeah, that's a forum. And Ethereum Magicians is more about coordinating on EIPs, as far as I'm aware, and discussing standards.

Nicholas: Okay, okay. So in the context of one of these discussions, let's say someone's made a proposal, some discussion has happened, maybe it started in Telegram, maybe they write a post on Ethereum Magicians, or maybe they write something more serious and it gets to ETH Research. How do the technical choices get argued about? How is it possible that decisions are made in such a flat, decentralized fashion?

Domothy: It really depends on the specific thing that's happening, but the actual implementation and technical details come down to the specs. For the consensus layer, there's a whole repo of beacon chain specs, written in Python with the only goal of being readable by humans: if you know some Python, you can follow along with the specs without being bogged down with peer-to-peer stuff and networking. And then the actual clients can implement the specs in whatever way they want, optimizing for whatever they want: you get Erigon, which optimizes for archive nodes, and other clients that optimize for fast sync, and things like that. But as long as they follow the same specs, all these clients come together and agree on the blocks. And the specs involve hard forks, so starting from a given block there's going to be a slightly different set of rules for validating blocks, and all the clients agree to fix this. Then all the node runners, solo stakers, node operators, just update their nodes to agree with the fork. And so far there hasn't been any very contentious fork; if there's contention in the community, it's going to surface in the process way, way before there's an actual fork, so nobody wastes their time implementing something that people don't want. So that's kind of the way decentralized consensus happens.
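To give a flavor of what "readable Python specs" means, here is a made-up state transition in that style. It is not an excerpt from the real specs, which live in the ethereum/consensus-specs repo, though `compute_epoch_at_slot` is modeled on a real spec function of that name.

```python
# Illustrative only: written in the style of the executable Python
# consensus specs, not copied from them. Spec functions operate on a
# typed state object in small, auditable steps.
from dataclasses import dataclass

SLOTS_PER_EPOCH = 32  # the real spec defines this as a named constant too

@dataclass
class Checkpoint:
    epoch: int
    root: bytes

@dataclass
class ToyState:
    slot: int
    finalized_checkpoint: Checkpoint

def compute_epoch_at_slot(slot: int) -> int:
    # Pure helper functions like this make the rules easy to audit.
    return slot // SLOTS_PER_EPOCH

def process_slot(state: ToyState) -> None:
    # Transition functions mutate the state object one rule at a time.
    state.slot += 1

state = ToyState(slot=63, finalized_checkpoint=Checkpoint(0, b"\x00" * 32))
process_slot(state)
assert state.slot == 64
assert compute_epoch_at_slot(state.slot) == 2
```

Because the spec itself is executable, client teams can run it directly against test vectors and then optimize their own implementations however they like, as long as the outputs match.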

Nicholas: So it happens primarily at the spec-definition stage.

Domothy: Yeah pretty much.

Nicholas: Or before as that's being shaken out. Yeah.

Domothy: And that's why it can take years before getting to actual mainnet.

Nicholas: Right. So this is my question: for instance, for the merge, were there any... I mean, they had the financial motivation to do so, but my impression is that miners were not subsidizing developers to infiltrate the development process of the merge to try to shift it somehow in their favor. Is that a correct perception?

Domothy: Well, yeah, pretty much. The miners are basically like employees of the chain, and the community wanted proof of stake, so even if all the miners didn't want it, we would still have had it done, with the way the transition was set up: it wasn't an actual block number but a total difficulty threshold, so miners mined till the very end. And there have been forks that died off very quickly because they weren't wanted, like a fork of Ethereum that stayed on proof of work.

Nicholas: I guess what I'm trying to get at is: this industry and blockchain design are so steeped in mechanism design, in game theory and incentive design, and yet it would seem there would be a huge incentive to manipulate the process of Ethereum spec development. My impression, and maybe you can correct me here, is that the best technical answers generally win. But there's a huge financial incentive for people to throw it off, or try to ruin the chain, or slow down development, or get something they want in. I know people complain sometimes about Uniswap manipulating, getting their way in EIP development, which maybe is an exaggeration and not accurate, or this issue of miners obviously having some financial interest in avoiding or delaying the merge. Do you have any thoughts on why that doesn't happen, or, if it does, how it gets rooted out eventually?

Domothy: It really comes down to community effort most of the time, because like I said, it happens out in the open. So if a stakeholder comes in and says, let's make this change, and it's obvious that it only helps them and nobody else actually wants it, it's going to be spotted very quickly. In the extreme, it can come down to a contentious hard fork, but it usually doesn't, because those are so hard to coordinate and actually pull off. This is also kind of secondhand information for me; I'm not really an expert in the history of these things, but I know there was ProgPoW back in the day that people didn't really like, and there was a lot of drama, and then it just didn't happen.

Nicholas: Right. It's interesting, because the argument is often made that you have the opportunity to exit through forking, but in practice it's very difficult to pull off. So there's something about the Ethereum open-source development strategy that makes it so that, despite an incredible financial bounty for messing with the system one way or another, people aren't able to do it. It seems like there's some lesson that should be drawn from that and applied all over open source, especially in EVM-native ecosystem development, like DAOs trying to build open-source projects. But it seems to me that no one has really synthesized it, or at least I haven't heard it: how is it possible that this process works despite all the interests that would want to destroy it?

Domothy: I think the naive answer really comes down to community members being passionate about the same things and being aligned. If there were only one client with five developers, then yeah, there would be a concern that you could bribe them to tweak things slightly, to slip in a shady update that changes the rules without anyone noticing. But there are about ten clients, basically five on the consensus layer and five on the execution layer. There are too many people to bribe, it's all open source in GitHub repos and pull requests, and somebody's going to snitch, just like in any open-source project. You would have to marshal unreasonable resources to bribe everyone all at once and keep everyone silent, because the developers actually writing the code for these clients are people who are passionate about Ethereum and aligned on its goals.

Nicholas: When moving toward the development of a spec, moving along that path, is there a notion of voting? How is consensus achieved for defining a spec?

Domothy: There's not really voting. It's just people arguing with each other, about high-level things like what should be done, and low-level things like actual lines of code in the specs. It's really people debating out in the open, and you have to convince people that changing something is a good idea. The EIP process in a nutshell is that technically anyone can submit an EIP and then be in charge of convincing other people it's a good idea. But in practice it usually comes down to the more knowledgeable developers, namely Vitalik, who knows a lot about these things and can easily articulate why something is a good or bad idea.

Nicholas: So ultimately you're just trying to get buy-in from the client developers. Are they the ones deciding on the spec? It's a community of people, but before the spec is completed, it's not really about the people who will be implementing the clients. Maybe they're knowledgeable and participating in the conversation, but their potentially defecting from your plan comes much later in the process, once the spec has already been approved by the majority, right? So it's about articulating a change that people believe is worth taking up, and that they believe node developers will also adhere to. It's very surprising that that works.

Domothy: Devs are one part of the equation, and it's basically the same process as for Bitcoin, except with a few more clients, more stakeholders, and of course more community willingness to do hard forks. But it's basically the same governance process. It all happens off-chain, and it's a community consensus: there are the devs, and the people who actually run nodes, whether they stake or not, the same as with Bitcoin and mining. But it comes down to community, mostly.

Nicholas: It's incredible to me. It's incredible that there aren't more like VCs involved and trying to manipulate things in favor of their portfolio companies, etc.

Domothy: Well, they're all at the app layer.

Nicholas: It sounds like it's almost insulated by how nerdy it is, and how not directly profitable it is to muck around at the protocol layer.

Domothy: Yeah, kind of. I would say the philosophy of the Ethereum Foundation is addition by subtraction: rather than doing things from the top down, you try to find a way to get community people to come in and solve specific problems without the EF being in direct control.

Nicholas: It remains an enigma to me that this whole system actually works and that the development of the EVM is so... It's a social process, obviously, but what's surprising is that it's not so easy to articulate why it actually works. You could just as easily imagine another organization with similar design that does not achieve what Ethereum Foundation and all core devs, etc., EIP process, what the whole ecosystem is able to achieve. It still feels to me like something is eluding my grasp here on how it actually works. But it is fascinating and very impressive. If people want to get involved with Ethereum Foundation, did I see that there's a grant program for people who want to become a fellow or something like that?

Domothy: There's a fellowship program, yeah. I'm not too aware of the specifics on that one, but I think there's a new cohort that's been selected, or is about to be selected. I'm not too sure.

Nicholas: But in general, if people want to keep up, I know we talked a little bit about The Daily Gwei having good summaries of what's going on. Is that the best resource?

Domothy: Yeah, he's an extremely good resource. If you're not on Twitter too much, you can just watch his YouTube video every day, follow the timestamps, follow the tweets, and follow the links. It's a huge time saver for anyone who wants to keep up without being on Twitter 24/7.

Nicholas: And I guess realistically, even 24/7 on Twitter, you really can't follow all the details of all the different projects and teams and obstacles. So there's that. I know there's also the All Core Devs call, which is streamed on YouTube. That's more in the weeds than summaries, I guess. Are there any other regular things people should be checking out? I guess they can check out ethresear.ch, and Ethereum Magicians for EIP stuff. Are there other resources you think of when you think of where people congregate, or what's good to keep up on if you're super into this stuff?

Domothy: I think that's pretty much all of them. As far as keeping up goes, I'd also like to recommend Ben Edgington's book, Upgrading Ethereum, where he goes through the Ethereum specs in a way that's readable and explains the historical reasons why the specs were chosen that way and all the trade-offs involved. It's an amazing, huge resource on everything proof of stake for Ethereum.

Nicholas: Wow, that sounds great. Okay, I'm definitely going to read that. Dom, this was a wonderful conversation. Thank you so much for coming and sharing. Maybe we have a couple minutes if anyone wants to ask a question directly. You can request before we end the show. If not, we can call it there. Does anybody have any last-minute questions? They can be simple questions, too. It doesn't need to be gigabrain.

Domothy: Yeah, don't put me on the spot either.

Nicholas: All right, I think everybody is satisfied with our conversation. Dom, this was great. Thank you so much for coming through and answering all these various questions.

Domothy: I ran into you at that meetup.

Nicholas: Yeah, that was awesome. We're both in Montreal, so that's fun. If anybody is interested in this, definitely check out Dom's pinned tweet, which has a great summary of what's going on in Ethereum's recent past, present, and near future. Thanks again, Dom. This was great. Thanks, everybody, for coming to listen. See you next week. Hey, thanks for listening to this episode of Web3 Galaxy Brain. To keep up with everything Web3, follow me on Twitter, @Nicholas with four leading N's. You can find links to the topics discussed on today's episode in the show notes. Podcast feed links are available at web3galaxybrain.com. Web3 Galaxy Brain airs live most Friday afternoons at 5 p.m. Eastern Time, 2200 UTC, on Twitter Spaces. I look forward to seeing you there.
