Web3 Galaxy Brain 🌌🧠


Scott Sunarto Built a Toy L2

2 October 2023


Transcript

Nicholas: Before we get started, if you love this episode, please write a review of Web3 Galaxy Brain. Thanks to listeners like you, we've more than doubled the number of reviews in Apple Podcasts and Spotify over the last month. Thank you. Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week, I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guest today is Scott Sunarto. Scott is the founder of Argus Labs, a decentralized gaming company. He was also a contributor to Dark Forest. On this episode, Scott joins me to discuss his cookie clicker rollup, a toy L2 he built to better understand and demonstrate the architecture of optimistic rollups. We discuss optimistic and ZK rollups, censorship resistance, off-chain execution, sovereignty, the spectrum of EVM equivalence, and much more. If you're interested in learning more about L2 architecture from first principles, this episode is for you. My thanks to First Mate, who provided the recording studio for today's episode. If you're creating NFTs and want to run your own branded secondary market that aggregates listings across all NFT marketplaces and enforces NFT secondary royalties, check out First Mate at firstmate.xyz. As always, this show is provided as entertainment and does not constitute legal, financial or tax advice or any form of endorsement or suggestion. Crypto has risks and you alone are responsible for doing your research and making your own decisions. Welcome, everybody. Scott, how's it going?

Scott Sunarto: Yeah, doing good. How are you?

Nicholas: Great. It's like the middle of the night for you, isn't it?

Scott Sunarto: Yep. 1 a.m. right over here.

Nicholas: That's how we go hard in this space.

Scott Sunarto: Yeah, doing good. I'm just traveling a lot these days, kind of at the end of my Asia trip. So hopefully I'll be able to go back soon and adjust back to my normal sleeping schedule. But yeah, we make do for now.

Nicholas: What's the Asia trip all about?

Scott Sunarto: Yeah, so we went to Korea Blockchain Week and did a hackathon for World Engine, which is this gaming rollup SDK we built that has a sharding system, a custom runtime for games, and all that. And that's really the source of inspiration for the cookie clicker rollup I was working on. Of course, World Engine is much more complex and much more powerful than the cookie clicker rollup. But the whole idea with the cookie clicker rollup is to completely distill a rollup down to its simplest components, so that people can just go to the codebase and basically build a good mental model of how a rollup truly works, instead of seeing it as this magic software that can do decentralization and beep boop and all that. So that's really the whole inspiration. I was just kind of bored, and I was like, okay, yeah, let's do this.

Nicholas: So badass. Okay, so tell me about the cookie clicker rollup. I'd love to talk about it. I guess I've been thinking of it as a toy L2. Is that an appropriate way to describe it?

Scott Sunarto: Yeah, yeah. I think that's a very accurate way of thinking about it. I feel like a lot of rollups right now are, from the get-go, very, very complex, because they're basically trying to simulate how the Ethereum L1 works. So Optimism and Arbitrum are these smart contract rollups: they're basically emulating how the Ethereum L1 works on a layer two. But that's not how all rollups need to be, right? You can use the same principles that were used to build rollups like Arbitrum and Optimism to build much, much simpler rollups. And in this scenario, I basically built an entire blockchain, an entire execution layer, that only does one thing. It literally only increments the cookie click integer count by one every time you submit a transaction to the chain.

Nicholas: Amazing.

Scott Sunarto: And so that's basically what the entire chain does. It does nothing else. You can't deploy smart contracts. You can't do anything else. You can only click the cookie and nothing else. Back to basics. And by distilling it like that, you can actually do things that traditional rollups struggle to do, because of their sheer complexity. For instance, with Optimism, when they're trying to do things like fraud proving, they have to use this very complicated, basically fully on-chain MIPS interpreter or Wasm interpreter to really dissect the execution of the EVM code on chain, because it is very complex. You can't do it by just transcribing the entire geth codebase into Solidity, and that's what they would otherwise need to do. However, if your rollup is very simple, let's say it's a cookie clicker that literally only increments a number by one, there's nothing stopping you from literally transcribing the entire execution layer logic into Solidity. And as a result of that, you can build a rollup that has single-shot fraud proving, which even Optimism and Arbitrum cannot have, because the rollup is just so simple. And that's a really interesting way to learn why rollups make certain architectural decisions. Optimism pre-Bedrock had a very different architecture. They were going for a single-shot fraud proving architecture, and eventually they realized that was probably not a good direction, because it's not as EVM equivalent as the current way of doing things with Bedrock. So there are a lot of opportunities here to essentially go through the same architectural decisions that rollup developers have to make, which otherwise wouldn't be obvious, because you've never really tried to put yourself in their shoes and make those architectural decisions. Yeah, there are a lot of these super interesting tidbits.

Nicholas: Amazing.

Scott Sunarto: And I think there are also a lot of these subtle nuances that you only realize when you're trying to run your toy rollup and it starts breaking down, and then, oh damn, apparently it's not as obvious as it looks. Because sometimes when you're reading Twitter explainers of how a rollup works, or you're reading a white paper on how a rollup works, they oftentimes make the assumption that the implementation details are perfect. But as with all things in software engineering, there are always these subtle nuances. Like, okay, you're submitting a transaction to, let's say, an Ethereum node, and then you need to wait until that transaction gets processed. In this case, the rollup's transactions are, more often than not, calldata bundles that the sequencer created. But what happens if, for whatever reason, the Ethereum node drops your transaction? Maybe you don't pay enough gas, or maybe for whatever reason that Ethereum node is not gossiping your transaction into the mempool properly. So there's a whole can of worms there that I think more people ought to spend more time thinking about, especially as a lot of people want to build their own rollup. But a lot of these...

Nicholas: Yeah, yeah, totally. So maybe to start off, let's assume the audience is clever. Maybe they've written some Solidity, they know enough about the EVM, but they don't deeply know L2 architecture. So what is an L2, really?

Scott Sunarto: So a layer two, man, I'm going to trigger a lot of wars, because no one seems to be able to agree on what a layer two is. But to me, a layer two is really a construction of a blockchain that allows you to inherit security from an underlying chain. And I think the key word here, which is very loaded, or very overloaded right now, is: what does it mean to be secured by an L1? What does it mean when a blockchain provides you with security? What does it mean to be as decentralized as the L1? And I think there are a lot of concrete properties that we can actually try to measure or verify. For instance, censorship resistance. That's a property of blockchains that people often want, especially for things like payments. For storing your assets or anything of value, you want something censorship resistant, such that if, let's say, one operator of the rollup starts censoring your transactions, there is an alternative way for you to recover your tokens, for instance. The other part is, of course, the integrity of computation. You want to make sure that if the rollup says that, let's say, 20 tokens moved from account A to account B, that computation is actually valid instead of being completely made up. In the example of the cookie clicker, say the chain claims the cookie got incremented from five to 6969, but no transactions from users actually indicated that the cookie should be incremented to that number. There needs to be a way for users to be able to challenge that. And for layer twos, this is what the L1 provides: it provides censorship resistance guarantees, and it also provides those computation integrity guarantees to the layer two. Now the benefit, of course, is that layer two scaling solutions do not need to be fully run on top of the L1, and that basically allows them to scale by running all of the blockchain execution off-chain. And I know this is not a word people like to use when referring to rollups or layer twos, but rollups and layer twos are basically off-chain computation. The idea of off-chain just means relative to the L1: all of this execution of blockchain transactions and computation is off-chain relative to the L1, so it doesn't congest the layer one itself. Only things like ZK verification or optimistic fraud proving are done on the layer one itself. The rest, like, oh, you're trying to transfer token A from account one to account two, all of those things happen off-chain and don't require you to execute that computation on the layer one. And that's how layer twos are able to scale.

Nicholas: Right, so basically those are the three properties: censorship resistance, computational integrity, meaning they're valid transactions, and that they're executed off-chain for scaling. Are those three the only things that are primary in your perspective on L2s?

Scott Sunarto: I think for most users, that's the right mental model. Of course, we can make this significantly more complex. We can talk about things like sovereignty. There are many forms of rollups. There's what people call settlement rollups: Optimism and Arbitrum are settlement rollups, where you basically have a smart contract on Ethereum that allows you to fraud prove and that decides what the canonical fork of the rollup is, and so on and so forth. But there are also things like sovereign rollups, which do not settle anywhere, and sovereign rollups have different properties. So there's a lot...

Nicholas: In the fraud proof example, the benefit of being on an L2 with fraud proofs is that you're guaranteed by the L1 that if there is any kind of foul play on the L2, you can always resort to the L1 to interact and make sure that your transactions are included, or that a fraudulent state is disproved.

Scott Sunarto: Yeah, so there are additional nuances to that, depending on whether your rollup or layer two is a sovereign rollup or a settlement rollup. Because if it's a settlement rollup, yes, you can use an escape hatch to basically rescue your tokens. With sovereign rollups, since technically you're only using the layer one as a data availability layer, there's no layer one you can directly withdraw tokens to. And so the only way you can rescue your tokens in a sovereign rollup is through a social consensus fork. In a sovereign rollup there is a fraud proof too, but the fraud proof is basically submitted to the P2P layer of the rollup, and then other peers in the chain can see, okay, there is a fraud proof, and I verify this fraud proof that there is indeed fraud, and therefore we perform a social consensus fork. And then the bridges would verify that the fraud proof is correct as well and shift to the correct chain, and that will allow you to withdraw the token.

Nicholas: If there is a fraud proof, what exactly happens next in something like Optimism or Arbitrum? How do we land on the right chain?

Scott Sunarto: Yeah, yeah. So basically what happens is that if there is a fraud proof on a chain like Optimism or Arbitrum, the chain would essentially roll back to the point where the dispute happened. Let's say there is a dispute in the last 10 blocks. That means the rollup would roll back those past 10 blocks, and now you're back at the last state where there's no disagreement about what the state transition is. Of course, now there's a problem: what happens with all the transactions that got reverted? And this becomes a point of debate, and a point of research right now in the rollup space, in terms of how we should handle the transactions that were rolled back. Should we automatically reapply those transactions, or should we just let it be, continue business as usual, and have people resubmit their transactions manually? That opens a very interesting can of worms, right? Because what if someone used it for a payment and their transaction got rolled back, and whatever? So that's a very interesting point of discussion right now. There are some implications for MEV and so on. I actually had this conversation very recently at EthCC. I don't have a strong opinion about it yet, but I think replaying transactions is definitely much more complex than most people think it would be, because of the second-order effects of trying to replay transactions without that one specific disputed transaction. Yeah.

Nicholas: Right. I want to keep going through the basics of L2s, but it does make me wonder. One thing that really concerns me is this: you say you can rescue your funds by forcing inclusion with the trapdoor on L1, or with a fraud proof. These challenge periods are typically like seven days long, right? So we could be rolled back all the way to something that happened last week. And with all of the interconnected assets... I mean, first of all, when you say you can rescue your funds, that's only if the funds are native to L1, right, and they've been bridged to the L2. If they're L2-native funds, maybe it's more complicated. You can't retrieve them and bring them back to L1 by pulling them out of the vault on L1, because there is no vault on L1; they're native to the L2. So what does this say... I mean, if fraud is proved, am I using that verbiage correctly? If fraud is proved, then what happens? If you can no longer trust the L2 for some blocks, I guess we just go back to some point in the past and move forward. But it does seem to me that if you have assets that are native to one L2 and have been bridged to another L2 via something like Hop, we're potentially exposed to something like exponential risk as the assets move between these L2s, since any of them could be rolled back, causing a cascading... a big problem for someone like Hop, I guess.

Scott Sunarto: Yeah, that's a great question. And as you might have realized, these bridges that bridge from L2 to other L2s, or even L2 to L1, while skipping the challenge period, are typically economic bridges. Someone is taking on the risk by basically providing liquidity to guarantee those withdrawals or transfers or bridging. And the way they're able to do this is there's this interesting property where, although there's the possibility of someone triggering a fraud proof, as long as you have the data, the call data for the rollup, you can use a full node to replay those transactions yourself and verify that the computation is correct. And if you can replay the transactions on a full node and verify that the state transitions are correct, that they produce the same Merkle root, then you can be pretty confident those transactions are not going to be rolled back. The only problem, of course, is that right now we don't have a way to generate a proof from a full node that all of these computations are valid, because we can't have an on-chain full node. We can have an on-chain light client, but we can't have an on-chain full node that completely replays these transactions and attests that, hey, these are valid state transitions and we can move forward in time. And this is where the difference is with, for instance, ZK rollups. With ZK rollups, at every block they're generating a proof of the computation, basically a proof of the state transitions from the block, and then they're submitting that to the L1, and the L1 verifies that this zero-knowledge proof of the computation is correct. And by doing that, you know these blocks are valid and will not be rolled back. That's different from Optimism, where you can still run a full node to verify that the computation is correct, but there isn't really a way to convince the L1 right now that this block will not be rolled back until the challenge period elapses. And of course, this is where, if you've seen the discussion on Twitter, there's this question of the end game of optimistic rollups: basically a combination of proof systems where you use multiple fraud proofs from different Ethereum clients, but also possibly integrate zero-knowledge proofs somehow, for instance to reduce the challenge period by allowing people to generate a zero-knowledge proof of the fraud instead of having to replay it fully on chain, and so on and so forth. So there is a...

Nicholas: I just want to clarify one thing. If the challenge period has not completed, what computation am I able to do off chain to run through these transactions? How exactly am I able to be sure that they're legitimate in advance of the challenge period clearing on L1? What is that, really?

Scott Sunarto: Yeah, yeah. So these are basically two different rollup proving systems. With optimistic rollups, there's no way right now to prove that a block is valid without actually waiting until the challenge period has elapsed. But with ZK rollups, the way it works is basically that a block is only considered valid if you submit it to the L1 while also attaching a proof to it. And the interesting property of ZK rollups is this: when people talk about ZK, they often associate it with privacy, but in the context of ZK rollups they're using ZK not as a way to get privacy, but to generate a proof that the computation performed off-chain in the ZK rollup is indeed valid. The way they do it is basically: you have this transaction call data that is fed into the state machine, and then you generate a proof that, given the state root of the previous block and the state root of this block, the transaction data, when executed, results in the post-state root. And so that's what ZK rollups do. They don't have fraud proofs. They only have these ZK proofs that convince the L1 that the block is valid. And that's a really interesting property of ZK rollups. Now, instant finality... I wouldn't use the phrase instant finality, because it's not instant, right? Because generating a proof... Yeah.
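To make the shape of that concrete, here is a minimal sketch of what a ZK rollup's settlement contract could look like, assuming the public inputs Scott describes (the previous state root, the claimed post-state root, and a commitment to the transaction data). The IVerifier interface and every name here are illustrative assumptions, not any particular rollup's API.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // Hypothetical proof verifier interface; real ZK rollups each have their own.
    interface IVerifier {
        function verify(bytes calldata proof, bytes32[] calldata publicInputs) external view returns (bool);
    }

    contract ToyZkSettlement {
        IVerifier public immutable verifier;
        bytes32 public stateRoot;          // latest proven state root
        uint256 public lastBlockNumber;

        constructor(IVerifier _verifier, bytes32 genesisRoot) {
            verifier = _verifier;
            stateRoot = genesisRoot;
        }

        // A block is only accepted together with a validity proof that
        // executing `txData` on top of `stateRoot` yields `postRoot`.
        function submitBlock(bytes32 postRoot, bytes calldata txData, bytes calldata proof) external {
            bytes32[] memory publicInputs = new bytes32[](3);
            publicInputs[0] = stateRoot;            // pre-state root
            publicInputs[1] = postRoot;             // claimed post-state root
            publicInputs[2] = keccak256(txData);    // commitment to the transaction data
            require(verifier.verify(proof, publicInputs), "invalid validity proof");

            stateRoot = postRoot;  // no challenge period: the block is final once proven on L1
            lastBlockNumber++;
        }
    }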

Nicholas: You have to generate this proof, which can take, what, up to an hour? Or it depends. It's computationally intense.

Scott Sunarto: It's getting faster. But the key thing here is that these proofs are not free. It takes computational power, definitely more computational power than just running the execution like in an optimistic rollup. So you have to run clusters of GPUs to be able to generate these ZK proofs. Some people are building custom hardware to accelerate ZK proving. Yeah, like ASICs. So beyond just using GPUs to generate these proofs, hardware proof acceleration is definitely one of the focuses for a lot of ZK rollups, to make sure they're able to generate these proofs faster or cheaper, get more transactions per second, and so on and so forth. But right now, I would say it is definitely a trade-off that I don't think a lot of people like to acknowledge, but it is very real.

Nicholas: My friend Brandon Gomes, who's working on ZK L2 tech as well, mentioned to me at DevCon in Bogota that from the user experience perspective, because of the kind of economic guarantee you get, the UX of an optimistic rollup will always be better than a ZK rollup until the proving time is brought down to nearly instant on the compute side. Because from the perspective of the user, an optimistic rollup just processes, and it's very rare that there's a rollback. So the experience is so much better on optimistic that even though ZK has hypothetically instant finality, or finality as soon as the proof is on chain, you don't really get an advantage from the UX. The way I interpreted this is: as long as ZK takes any amount of time at all to prove, you will always want some kind of optimistic solution to fill the gap so that it feels instant to the user.

Scott Sunarto: Yep. So the key thing I would also highlight here is that if your layer one is Ethereum, there's no way you will be able to have instant finality, because Ethereum itself doesn't have instant finality. With Casper FFG, you only have finality after two epochs, which is around 15 minutes, if I recall correctly. So that is the lower bound on finality in any rollup, be it a ZK rollup or an optimistic rollup. So I think instant finality is a very confusing term. And for the audience here, I also want to slightly expand on what finality actually means. Yeah, please. Because sometimes a lot of layer ones or layer twos really like to use this as marketing jargon and use it inappropriately. But basically, finality is the point in time past which you know for a fact your transaction will not be reversed. I actually have a blog post about this. If you go to my Twitter profile and go to my website from there, I have only one blog post, and that one blog post is about finality, comparing how different blockchains and different rollups have different finality.

Nicholas: I love the one blog post. That's a flex.

Scott Sunarto: Yeah. But the key thing here is, I want to contrast this with the fact that a transaction getting confirmed is not finality. Let's say you submit a transaction on an optimistic layer two. That transaction is not final until the layer one is finalized. Because if the layer one rolls back, let's say the layer one rolls back and removes the transaction data that the rollup sequencer posted, then it's basically as if that transaction never happened in the first place, and the rollup would need to roll back to take that into account. So the lower bound on the rollup's finality will always be the finality of the layer one. The second part is the finality of the proof system itself. With ZK, finality is basically when you have a transaction, you have a proof, and you're able to verify that proof on chain. With an optimistic rollup, this would be the challenge period. However, I would also say that's very debatable. I think Toggle would have a very different opinion than me here. He thinks that as long as you have a full node, and that full node is able to verify the transaction data submitted to the layer one after it's finalized, then you can be confident your transactions will not be rolled back. However, I slightly disagree with that, because there is still some slight risk of transactions getting rolled back. The most concrete guarantee that things will not be rolled back only comes after the challenge period has elapsed, because from the perspective of the smart contract on the settlement layer, the layer one, that is the point in time after which there's no way you're going to roll back. Before then, there are these chances where, for instance, maybe the layer one MIPS or Wasm interpreter has a slight bug, or a slight disagreement with the actual execution layer, and as a result things get rolled back. So a lot of these smart contract risks and technical risks still do exist, and I think it's important to take them into account. However, in a utopian world where we implement perfect software, it is indeed true that if the transaction call data is verified on the layer one, and you can verify that the computation is valid on a full node, you can be somewhat confident that it will not roll back on an optimistic layer two. So I would caveat that, because on a lot of these things there are very different viewpoints. We can't even agree on the definition of a layer two. So I'd just say there are a lot of caveats and a lot of perspectives here.

Nicholas: So if we wanted to look at maybe just the cookie cutter, sorry, cookie clicker L2, the toy L2 that you've built, it may be a good example of a minimum L2. What are the major architectural pieces in your L2?

Scott Sunarto: Yeah, yeah, yeah. So the way I think about it, the mental model I have of a rollup, is that it's composed of three main components. The first one is the execution layer. The execution layer is basically the state machine of the rollup; it's what handles all the users' transactions and what the current state of the blockchain essentially is. The second part is the sequencer, which basically turns the transaction data that comes into the execution layer into a data block that can be posted to the Ethereum layer one, or any other layer one that you choose. And last but not least is the proof system. This could again be a fraud proof, or it could be a ZK proof. In our scenario, what I'm going to do is basically just write a Solidity interpreter, a Solidity smart contract that interprets this very, very bare-bones state machine of the rollup. And that allows us to do a very, very simple fraud prover that doesn't need a MIPS or Wasm interpreter. It can literally just verify that, hey, given this transaction call data, and this is the previous state root, and this is the next state root, you just hash the number, the amount of cookie clicks, and verify that this is indeed the post-state root hash. And just by doing that, you'll be able to have a fraud proof system that is fully on chain and single shot: you only need one transaction to submit the fraud proof. And then you have this very, very minimal fraud proof system. And I was joking about this as well. I don't know how much you've used L2Beat. L2Beat has these rollup stages: stage zero is, I think, when you don't have fraud proofs, and then stage one and stage two each require certain things. And I was joking that if I launched this on mainnet, I could be one of the few stage two rollups on L2Beat, because there are not a lot of them. Even if you look at Arbitrum or OP Mainnet, OP Mainnet is stage zero and Arbitrum is stage one; there are only about three stage two rollups out there. So if I deploy this to mainnet, I would really want to try to convince the L2Beat people to list my rollup as a stage two rollup. Sure, it will not have any TVL on it, but it would be a stage two rollup by definition, essentially. Yeah, I think that was really the meme motivation for finishing the project. That's my aspiration for next month, to get it to that stage.
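As a hedged illustration of the single-shot fraud prover Scott is describing, here is a toy Solidity sketch. It assumes the rollup's state root is simply the hash of the cookie count and that every transaction in a batch increments the count by exactly one; all names are illustrative, and a real system would also handle the actual rollback.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // Toy single-shot fraud proof for a cookie clicker rollup.
    contract CookieFraudProver {
        struct Block {
            bytes32 preStateRoot;   // state root before this batch
            bytes32 postStateRoot;  // state root the sequencer claims after this batch
            uint256 txCount;        // number of click transactions in the batch
            bool    challengedOut;  // set to true if fraud was proven
        }

        Block[] public blocks;

        function submitBlock(bytes32 preRoot, bytes32 postRoot, uint256 txCount) external {
            blocks.push(Block(preRoot, postRoot, txCount, false));
        }

        // Single-transaction, fully on-chain fraud proof: re-execute the (trivial)
        // state transition and compare against the claimed post-state root.
        function proveFraud(uint256 blockIndex, uint256 preCookieCount) external {
            Block storage b = blocks[blockIndex];
            require(keccak256(abi.encode(preCookieCount)) == b.preStateRoot, "wrong pre-state");

            // Re-run the whole execution layer in one line: each tx adds one click.
            bytes32 expectedPostRoot = keccak256(abi.encode(preCookieCount + b.txCount));

            require(expectedPostRoot != b.postStateRoot, "block is actually valid");
            b.challengedOut = true;  // in a real system this would trigger a rollback
        }
    }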

Scott Sunarto: You can't even have tokens on the cookie clicker roll up. It doesn't allow you to like deposit a token and like mint it on like a layer two. There's like no way you can do that.

Nicholas: But you have cookies. So there is something that's, if not transferable, at least fungible.

Scott Sunarto: So the only thing the cookie clicker rollup does is increment a number by one, on a rollup.

Nicholas: Okay, I guess.

Scott Sunarto: Yeah, so it's literally just a single cookie. Everyone can click the cookie, and the cookie goes up by one, and that's it. So it literally does just one thing. It's a very, very dumb rollup. It's as simple as possible, to the point that you can't do much with it.

Nicholas: But the people who click are different EOAs. The entities that click are EOAs, so you could potentially wrap them. Like, if you did something more like an Ordinals gesture, you could say, if I clicked, I get a point, and then we could trade those points on some other chain. You know what I'm saying?

Scott Sunarto: Yeah, that could potentially be a feature you could add to the settlement contract, where you could basically bridge out your cookie click points. It could be a funny thing. Yeah, that's a good idea. I'll put it on my bucket list.

Nicholas: Right, you could be the most advanced GameFi L2 out there. The most legitimate on a technical level. I love it.

Scott Sunarto: You know, like the first gaming L2 to be a stage two rollup.

Nicholas: Exactly. I mean, think of the valuation, Scott.

Scott Sunarto: Yeah, yeah.

Nicholas: So you mentioned execution layer, sequencer, and the third is like a fraud proof mechanism?

Scott Sunarto: I would say proof system, yes. I want to be inclusive of the ZK rollups here. I don't want to rebrand layer twos and rollups to only be optimistic rollups. I don't know what the catch-all phrase is for the proof part, but I think some people refer to it as the proof system. The proof system can either be optimistic, a fraud prover, or in the ZK rollup case, it would be ZK proofs.

Nicholas: I just want to go a little deeper on that proof system. So essentially, do you need to be able to read the call data in order to verify that the proof is legitimate, based on the prior confirmed proof? Or is it the proofs alone? You depend on that call data that's written to L1 in a typical circumstance, of course there are other scenarios, but you basically run through the execution, right? Within the context of a smart contract on L1, in the kinds of examples we're talking about?

Scott Sunarto: Yep, yeah, that's correct. So basically, this is why data availability is such an important thing. Imagine a world where we're trying to build rollups, trying to build layer twos, but we don't have data availability. What could happen? Potentially, an actor could just prevent you from accessing the transaction data that the rollup has in a certain block. And if that's the case, you wouldn't be able to replay the transactions to check whether there was fraud in the first place. So this is why data availability is super important. And even as the Ethereum core devs are designing EIP-4844, the data blobs expire after, I think, around two weeks. And the reason two weeks is enough is that fraud proofs basically only run for, let's say, a week. So you don't need the data to stay there forever. You need the data to stay there during this period of a week where there might be a challenge, such that when a fraud proof gets triggered, you are guaranteed to be able to access the transaction data. And using that transaction data, you can replay the transactions and then point to the part where, okay, given this transaction, it should have resulted in this Merkle state root, but what you submitted wasn't the correct Merkle state root. That's where you'll be able to generate a fraud proof and convince the chain to roll back the state. So yeah, this is a very key part. You do need data availability. You cannot build a rollup without some sort of data availability guarantee, because at that point you have the risk of a party just withholding transaction data, and then you can't fraud prove at all.

Nicholas: And is that true for ZK rollups as well? I mean, data availability is still important there, but it's not exactly the same.

Scott Sunarto: There are ways around it. They have this very interesting construction, I think they call it Validium. In a Validium, they use something called a data availability committee. With data availability committees, you basically have a committee of operators, not the Ethereum L1, so let's say a consortium of operators. In the Starkware ecosystem, they have these data availability committee members; Arbitrum, with Arbitrum AnyTrust, also has data availability committees. These are an alternative option where, if you're willing to make some minor trust assumptions, you can basically share the responsibility of storing this data and making it available with this consortium of operators. As a result of that, you don't need to post all the call data to the Ethereum L1, and when needed, you would access the transaction data from the consortium. And you can basically verify the signatures from the consortium members on chain, on the layer one as well. So these are constructions that allow you to conserve the cost of posting call data to the layer one. But either way, you definitely still need some sort of data availability guarantee. The only question is: who is providing the data availability guarantee? And this is why there are a lot of people building alternative DA layers right now. Of course, the main way we do data availability right now is the Ethereum L1. Right now we don't have 4844, but once we have 4844, the call data costs are able to go down. But even then, there are still concerns about that cost not being low enough, there might not be enough data availability throughput, and so on and so forth. So there are things like Celestia, who's working on an L1 dedicated only to data availability. There's also EigenDA, which is using Ethereum restaking to build an alternative DA layer secured by Ethereum stake. So there are these different approaches, and then there's of course the data availability committee, which is basically a consortium DA layer.
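For illustration, here is a toy sketch of the on-chain committee check Scott mentions for Validium-style designs: the L1 side accepts a batch only if a quorum of data availability committee members has signed the hash of the batch data. The signature scheme, the fixed committee, and all names are simplified assumptions, not Starkware's or Arbitrum AnyTrust's actual implementation.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // Toy Validium-style data availability committee attestation check.
    // Committee members sign the hash of the L2 batch data off-chain; the contract
    // only treats the data as "available" if a quorum of signatures checks out.
    // (Simplified: raw-hash signatures, no EIP-712 domain, fixed committee.)
    contract ToyDACAttestation {
        mapping(address => bool) public isCommitteeMember;
        uint256 public immutable quorum;

        constructor(address[] memory members, uint256 _quorum) {
            for (uint256 i = 0; i < members.length; i++) {
                isCommitteeMember[members[i]] = true;
            }
            quorum = _quorum;
        }

        function checkAttestation(
            bytes32 dataHash,           // keccak256 of the L2 batch data held by the committee
            uint8[] calldata v,
            bytes32[] calldata r,
            bytes32[] calldata s
        ) public view returns (bool) {
            require(v.length == r.length && r.length == s.length, "length mismatch");
            uint256 valid;
            address last = address(0);
            for (uint256 i = 0; i < v.length; i++) {
                address signer = ecrecover(dataHash, v[i], r[i], s[i]);
                // require strictly increasing signer addresses so a member can't be counted twice
                require(signer > last, "unsorted or duplicate signer");
                last = signer;
                if (isCommitteeMember[signer]) valid++;
            }
            return valid >= quorum;
        }
    }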

Nicholas: Right, so right now, when a rollup like Optimism or Arbitrum or the cookie clicker posts a proof to the chain, they're storing the proof, or the Merkle root, I guess, and then they're passing as call data all of the transaction data that happened on their chain in the blocks included in that proof. And then you would have access to the data by going and reading the events on that contract for a period, or I guess forever if you go to an archive node. You're doing that off chain and finding that the proof doesn't correspond. But it is in the context of the smart contract, in memory during execution, that you're able to grab the call data and prove that a fraud has been committed, right? I'm a little confused about that. In the current system, where it's all call data, you do actually have access to the call data for two weeks? I was under the impression that you don't have access to it at all.

Scott Sunarto: Yeah, so to clarify: on the Ethereum layer one today, you're not storing it as some ephemeral data; you're posting it through a contract on the layer one, and it stays there forever, because right now, before we have 4844, we don't really have data blob expiry. Once we have 4844, we do have expiry for this layer two transaction data, and as a result we can have cheaper costs, because the data doesn't have to stay there forever, and so on and so forth. However, when I say call data, this is the call data of the layer two. It's not that we're storing the transaction data as call data on the layer one; it's more that when you make a transaction that calls a function on the layer two, you basically extract the call data: the function signatures you're calling, the arguments you're passing to those functions, your signatures, and so on. You take that data, maybe the rollup wants to do some compression, so you compress it, and then you post that to the layer one. And now the layer one smart contract, the fraud prover smart contract, or maybe they decouple it into a separate contract, has that layer two transaction data stored on the layer one. So that's the current approach: it's basically the same way you would do any normal long-term storage of data on a layer one. However, with 4844, the idea is to introduce a new way for people to store temporary data, and this is intended to be used to store this layer two call data on the L1. So yeah, that's what we're heading toward.

Nicholas: Yeah, 4844 sort of creates a special case for storing this data that's particularly well suited to, I guess, optimistic rollups in particular.

Scott Sunarto: I think ZK rollups also store transaction call data on the layer one. There are specific constructions of ZK rollups that don't do that and only store the state root and the ZK proof, and store the remaining transaction data in a DAC, however you want to call it. I'm not sure exactly which ZK rollups do what, but again, in most cases rollups do store all of their transaction call data on the Ethereum layer one. I do think Starkware does not store all the transaction data on the layer one, because I think ZK rollups have this compression property that allows them to reduce the cost of transactions as the number of transactions grows within a block. So again, there are these interesting properties of ZK rollup systems. But a lot of this is pretty bleeding edge, so there's a lot of research still being done on proof systems on the ZK rollup side of things. And this is why I'm particularly excited about optimistic rollups: optimistic rollups are now at a stage where I think we're approaching engineering maturity, where a lot of it comes down to hardening and making sure things work as intended. There are still some research questions, but there are no "we completely don't know how to do this" questions, other than maybe shared sequencers or something like that, but that's a separate case. So it comes down now to just engineering. While ZK rollups are one of the fields where there is continuous research into how we can build better hardware for acceleration and proving, speeding up proving times, or maybe exploring new proof systems. Some people want to do STARKs, and there are different approaches with STARKs as well. These are ongoing discussions, and you can see different ZK rollups arguing about how your ZK rollup is not a real ZK rollup and whatnot. So it's still very much at an earlier stage of maturity, and I think we'll continue to see them evolve. In the meantime, and I'm going to get cancelled for this opinion at some point, I think optimistic rollups are now much more mature than ZK rollups.

Nicholas: What other cancelable opinions do you have? Any strong opinions in this area?

Scott Sunarto: Yeah, I guess this is not really a cancelable opinion; it's more a call to action for more transparency. One of the biggest problems with ZK rollups right now is that a lot of them are not very transparent about their proving system: either they don't open source their proving system, or they're unwilling to publish their proving benchmarks. A lot of people like to claim that, okay, proving is parallelizable and therefore it's not a bottleneck, etc. But the reality is there are economic considerations: what are these ZK proofs actually costing users? And I think the reality is that many ZK rollups will just try to brush it off, like, okay, we'll just subsidize the ZK proving cost, run it on our Google Cloud or data center cluster, and just not talk about it. But these are conversations we really need to be having, so that people know where we actually are in the process of building mature ZK rollups. Because there are going to be ZK rollups out there that are not even close to being able to generate proofs within a reasonable time, with a reasonable amount of hardware and cloud spend, and that's a huge problem. Every time I'm on Twitter and someone argues with me about ZK rollups, I just ask them for a proving benchmark, and they just don't give me one. That's one of the most annoying things. I feel like we don't need to be playing that game. We could just be more transparent about where we actually are.

Nicholas: Because what you're saying is, if the proving cost is very high, it's centralized, is what you're saying? If they were to stop subsidizing that, then we would know.

Scott Sunarto: I don't actually think it's a discussion about centralized or not. I think it's more a discussion of what is practical in terms of transaction throughput, for instance. Because let's say a ZK rollup claims that, okay, we are able to infinitely scale to Web2 scale and whatnot. But if generating a proof for, let's say, a Uniswap trade costs them X amount of money, then it's not infinitely scalable, because you would be burning a ton of money to generate a ZK proof, or maybe you would have to wait a long, long time for the ZK proof to even arrive, and that would delay finality. And there's the question of congestion, right? Let's say your ZK rollup is faster because you're using ASICs, but you have congestion. During the congestion, you cannot just immediately spawn more ASICs out of thin air. You can't immediately spawn GPUs out of thin air. I mean, sure, we have cloud providers with a lot of GPUs and whatnot, but there are limits to how much you can scale ASIC or hardware availability. So hardware acceleration and actually improving the proving system is very worthwhile and very important research. These are not things we should just brush off and assume are already solved, because they are not solved. These are things we should be talking about more, and we should encourage people to think about more as well, instead of just launching them and whatnot. And this is also important because it helps us understand why some ZK rollups make trade-offs where they are less EVM equivalent than others that are trying to be more EVM equivalent. So for instance, zkSync is less EVM equivalent than some other ZK rollups. There are a lot of issues with zkSync's EVM compatibility; you need to use a custom Solidity compiler to work with zkSync, and so on, because they made these trade-offs. zkSync was one of the first movers on ZK-VMs, so they made these very explicit trade-offs to be able to make the ZK rollup work realistically, with proving times that are good enough and so on. But as a result, they made this trade-off of, for instance, needing a custom Solidity compiler.

Nicholas: I want to ask you about EVM equivalence in a second, but before that, I wonder, would it be useful to walk through, step by step, what happens if I want to click on the cookie clicker rollup? I feel like we might learn something going through that.

Scott Sunarto: Cool. Yeah, for sure. So, the way it works... and I'm just going to say from the get-go, we don't have decentralized sequencers. I'm not going to go that far with this cookie clicker. So, the execution layer: you can think about the execution layer much like a backend server. The execution layer's job is basically to be an endpoint that users can submit transactions to. So a user would submit a transaction to the execution layer, the execution layer would then verify the signature of that transaction, make sure that, okay, this signature does correspond to this user, et cetera, et cetera. And let's say, okay, this message is incrementing the cookie by one. So after the signature is verified, the execution layer increments the state of the blockchain, the cookie state, by one. And then after it's incremented, we have to rehash the state of the blockchain. Typically a blockchain would have a Merkle tree where each node is a different account and so on, and you'd have to Merkleize all of that. But since we only have a single cookie, you only need to hash the integer of the cookie count. So you basically just hash the integer of the cookie, and now you have a state root for the execution layer. So that's one thing to keep in mind: now we have the state root of the cookie clicker execution layer, and we also have the transaction that came from the user.
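Because the state machine is this small, the execution layer step Scott just walked through (verify the signature, increment the count, rehash the state) can itself be transcribed into a few lines of Solidity, which is exactly what makes the single-shot fraud proving above possible. A hedged sketch; the transaction format and hashing scheme are illustrative assumptions, not the actual cookie clicker codebase.

    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.19;

    // The cookie clicker execution layer, transcribed into Solidity.
    // Off-chain this logic runs in the rollup node; because it is this small,
    // the exact same logic can also be re-run on L1 inside a fraud proof.
    library CookieStateTransition {
        // A "click" transaction: the signer just asks for the cookie to go up by one.
        struct ClickTx {
            address clicker;
            uint256 nonce;
            uint8 v;
            bytes32 r;
            bytes32 s;
        }

        function applyClick(uint256 cookieCount, ClickTx memory txn) internal pure returns (uint256) {
            // 1. Verify the user's signature over (clicker, nonce).
            bytes32 digest = keccak256(abi.encode(txn.clicker, txn.nonce));
            require(ecrecover(digest, txn.v, txn.r, txn.s) == txn.clicker, "bad signature");
            // 2. The entire "business logic" of the chain: increment by one.
            return cookieCount + 1;
        }

        // 3. The state is a single integer, so the state root is just its hash.
        function stateRoot(uint256 cookieCount) internal pure returns (bytes32) {
            return keccak256(abi.encode(cookieCount));
        }
    }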

Nicholas: So this would be equivalent to like one block in an optimistic rollup.

Scott Sunarto: Yes, that is correct.

Nicholas: Okay, one L2 block, just to be clear.

Scott Sunarto: Yeah, yeah. One L2 block. So now that we have both of those pieces of information, the execution layer needs to get them to the layer one, but the execution layer doesn't know how to talk to the layer one. This is where the sequencer comes into play. The execution layer submits the state root and the transaction data to the sequencer, and the sequencer is responsible for posting that to the layer one. On the layer one you have a smart contract with a function that takes in the state root and the transaction data and stores them. And now you have all of these things stored on the layer one.
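
Continuing the same illustrative Go sketch, the hand-off could look roughly like this; the L1 contract is stubbed out as an append-only inbox, and none of these names come from a real implementation:

```go
// RollupBlock is what the execution layer hands to the sequencer:
// the raw transaction data plus the state root after applying it.
type RollupBlock struct {
	TxData    []byte
	StateRoot [32]byte
}

// L1Inbox stands in for the L1 smart contract: in reality a contract with
// a function like submitBlock(stateRoot, txData) that appends to storage.
type L1Inbox struct {
	Blocks []RollupBlock // order of submission defines the canonical order
}

func (l *L1Inbox) SubmitBlock(b RollupBlock) {
	l.Blocks = append(l.Blocks, b)
}

// Sequencer is the only component that knows how to talk to the L1.
type Sequencer struct {
	Inbox *L1Inbox
}

func (s *Sequencer) Post(txData []byte, stateRoot [32]byte) {
	s.Inbox.SubmitBlock(RollupBlock{TxData: txData, StateRoot: stateRoot})
}
```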

Nicholas: Yeah. Before we move on, I've heard it said that centralized sequencing is not such a big problem because of fraud proofs. Is that, does that make sense?

Scott Sunarto: This is very, very controversial. I am in the camp that centralized sequencers are honestly not that bad. But it really depends on who you're asking. The reason people sometimes want decentralized sequencers is, for instance, that you want as much real-time censorship resistance as possible. Let's say you have Aave or Compound on, say, Optimism or Arbitrum. Right now Optimism and Arbitrum don't have decentralized sequencers.

Nicholas: There's one process in the world that does the sequencing for each of those chains. There's one process on some cloud somewhere.

Scott Sunarto: Yes, yes. And say those sequencers decide that they no longer want to process your transactions. Or maybe they don't want to process anyone's transactions at all, right? In this case with Aave and Compound, what if we need to do a liquidation during the period when they stop processing transactions? Sure, right now there is a way to use the escape hatch, this trapdoor, to force-include a transaction into the layer two.

Nicholas: That is, you go directly to the L1 contract and talk to it.

Scott Sunarto: Yeah, yeah. But that also takes time. It's not instant, right? There's a delay between when you submit it to the layer one and when it appears on the layer two. So there are these gaps where you don't have real-time censorship resistance. And to a lot of people that's a problem: what if the failure to liquidate causes these DeFi protocols to become insolvent? It would effectively be an attack on those DeFi protocols via censorship. That's why real-time censorship resistance is an important attribute of these constructions. And so that is kind of where...

Nicholas: A detailed question on that. So when you use the trapdoor on L1 to force inclusion, it forces inclusion of a transaction on the L2, which will need to be included in the subsequent proof. The L2 will have to include that transaction. Correct me if I'm wrong on that, but also another question: where does it land in the sequence? Does it happen before the next block starts, at the end of the most recently proved blocks? For example, take FriendTech, which is operating on Base. Say you're being censored by the Base RPC for whatever reason, by their centralized sequencer, and you want to force the inclusion of a transaction to liquidate your position in Scott keys. The first barrier seems to be cost: if gas is very expensive, or the trapdoor function is expensive to call, you need to be losing more money on L2 than you're spending to force inclusion on L1. But the second question is, where in the sequence does it happen? Let's say everyone is liquidating their Scott keys because you said something very incendiary, or you've been saying too many lame things on Twitter and it's not exciting enough anymore. If everybody is selling the keys on Base and I force inclusion, where will my transaction show up in the Base chain?

Scott Sunarto: So to answer that, these are the parts I would call implementation details. But the high-level answer is, more often than not, and this is how I would implement it, it's based on the order of the calldata stored on the L1. One role of the sequencer that I didn't mention is determining the ordering of the transaction data, because the order matters for the rollup state. Let's say I have two different Uniswap trades. If I perform trade A first and then trade B, versus trade B first and then trade A, the resulting state might be different, right? So it's important for the sequencer to, as the name implies, provide the right sequence when it posts to the layer one. If someone wants to replay the transactions, they go through them in ordinal steps: first transaction, second transaction, and so on. So intuitively, if you use something like the trapdoor, the forced transaction would be ordered based on where it lands on the layer one relative to the other transaction data.

Nicholas: So let's say the price of your key is one ETH in the most recent confirmed L2 proof on L1. Then people start to liquidate. They're selling keys, buying keys, and the price is moving around on L2. Those transactions are sitting in the sequencer but have not yet been written to L1. I guess, how frequently are they written? But more importantly, the real question I have is: if I force inclusion, and it changes the state of the bonding curve on L2 in a way that invalidates a bunch of transactions that are still sitting in the sequencer and haven't yet been written to L1, they essentially have to roll back the L2. It's not rolling back anything that's on L1 yet. But all of those trades don't make sense anymore, because the key someone bought may have cost more than the amount of ETH they spent on it, thanks to the trapdoor transaction.

Scott Sunarto: That's a good point. The thing I would caveat here is that transactions that have already been processed on the L2 are typically what people call "soft confirmed". These are transactions the sequencer has already seen but that haven't reached L1 yet. So let's say you have a list of soft-confirmed transactions, and then you submit a transaction on the L1 and it shows up in this trapdoor. The sequencer can read from the L1 smart contract and say, "Okay, there is a transaction in our trapdoor. It's probably a good idea for us to read from that.". This is where implementation details differ between rollups. But I think transactions that are already soft confirmed would have priority. Then again, these are two things that are not communicating with each other, so this is where some slight hiccups can happen.
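
One possible ordering policy, sketched in the same illustrative Go style. As he says, this is implementation-specific, so treat the policy below (soft-confirmed transactions keep their place, forced transactions from the L1 trapdoor are appended after them) as an assumption for illustration, not how any particular rollup actually behaves:

```go
// Tx is a rollup transaction; Forced marks transactions that arrived via
// the L1 force-inclusion trapdoor rather than the sequencer's RPC.
type Tx struct {
	Data   []byte
	Forced bool
}

// nextBatch builds the next batch to post to L1: everything already soft
// confirmed keeps its position, and forced transactions observed on L1
// are appended after them, in the order they appeared on L1.
func nextBatch(softConfirmed []Tx, forcedFromL1 []Tx) []Tx {
	batch := make([]Tx, 0, len(softConfirmed)+len(forcedFromL1))
	batch = append(batch, softConfirmed...)
	batch = append(batch, forcedFromL1...)
	return batch
}
```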

Nicholas: I mean, it would be crazy. If you imagine this in reality for FriendTech, for example, it would be... I mean, I'm sure the UI does not have a way to handle, "Oh, actually, the key you bought didn't... That transaction that we thought was confirmed is not confirmed.".

Scott Sunarto: I think this is similar to what happens with reorgs on the Layer 1, so it's not behavior we've never seen before in a blockchain, because reorgs happen all the time. I'm not particularly sure how Optimism and Arbitrum handle it. But I think it's very possible that in that situation they just drop the soft-confirmed transactions. That would be a pretty bad UX approach, though, because it would mean someone could DDoS the Layer 2 by force-including stuff from the Layer 1. So I don't think it would be a good design, right?

Nicholas: So perhaps with these L2 UXs, we should not really trust a transaction just because the Privy thing says it's confirmed. It's really just soft confirmed. It's not like an L1 transaction that's been included, where undoing it would require rolling back the whole chain.

Scott Sunarto: Yeah, transactions on the Layer 2 are not final. In general, the point where you can be somewhat confident these transactions are not going to be screwed over is the moment the Layer 1 finalizes. Because even if it reaches the Layer 1, if for whatever reason Ethereum reorgs, those transactions are gone, and the Layer 2 would have to roll back as well.

Nicholas: And you could imagine a sophisticated attack doing multiple of these at the same time. Although I guess even if there is a reorg on L1, the reorg would most likely also have to exclude the sequencer's proof-writing transaction from the chain entirely for it to really matter. But it could be interesting to try to engineer a reorg that removes, let's say, the Arbitrum proof transaction, while simultaneously doing the trapdoor attack and hitting the sequencer with some kind of DDoS. You could potentially do some damage.

Scott Sunarto: Yeah, I think all of those are very possible. Especially considering that most L2s sequence to the chain, submitting data bundles at pretty much every block, it's very likely that even a rollback of one block would mess up the Layer 2. Also, I want to clarify one thing here: with optimistic rollups like Arbitrum and Optimism, we actually don't submit proofs at every block. With Optimism and Arbitrum, you're only posting transaction data bundles and state roots at every block, and the fraud proof is only submitted if there is fraud. I just wanted to clarify that: only ZK rollups provide a proof at every block. An optimistic rollup just assumes all the computation is correct unless proven otherwise.

Scott Sunarto: Yep. Yep. So that's basically the entire user flow, from the transaction being submitted to the execution layer to it getting to the layer one. Now comes the interesting part: what if there is fraud? Let's say for whatever reason the rollup suddenly says the cookie should be incremented from one to a thousand, but there are no user transactions that would justify incrementing it by a thousand. A third party can replay the transaction data and check: hey, this doesn't make sense, why are we incrementing the number by a thousand? Something sus is going on, we need to trigger a fraud proof process. So they call a function on the smart contract on the layer one, and at that point the smart contract replays the specific block where they disagree: this block shouldn't have been incremented, there are no transactions, why are you incrementing the number by a thousand? The smart contract checks what transaction data exists at that block, sees there are no transactions, and concludes the count definitely shouldn't have been incremented by a thousand. Therefore it rolls back the block, because there is fraud. And that's basically how the fraud proof works. Because this rollup is so simple, the only thing you have to do is replay: if there are transactions stored on the layer one smart contract for that block, you verify that the signatures are correct and that each one is indeed a message to increment the number by one. So if you want to increment by a thousand, you need a thousand of these valid increment-by-one signatures. You just replay that. There is a slight hiccup here, though. What happens if someone makes a million transactions on the layer two to increment the cookie clicker number by a million? We hit a problem where the smart contract on the layer one that needs to replay those transactions would exhaust the gas limit trying to do so. So it's important to also think about the gas limits of the layer two, to make sure there's no block too large to be replayed on the layer one to the point where your fraud proof breaks.
Because that would be an attack in its own right. This is also why most modern layer twos, like Optimism and Arbitrum, which are significantly more complex than the cookie clicker rollup, don't do this within a single transaction. This is where they have something called the interactive verification game, which breaks the fraud proof process into an iterative process: you narrow down the specific part of the computation where the two parties disagree and only replay that part on chain, because otherwise the computation would be too large to be done in a single transaction. You definitely don't want the case where you can't even fraud-prove a block with a single transaction because the computation is too complex. So that's why modern smart contract rollups are significantly more complex than the cookie clicker rollup: they have to account for these nuances that they inherit by having a very complex state machine.
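
A sketch of the single-transaction replay described here, in the same illustrative Go style. In a real optimistic rollup this logic lives in an L1 smart contract, not in a Go program; the names are hypothetical:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// ClickTx is a signed "increment the cookie by one" message.
type ClickTx struct {
	PubKey    ed25519.PublicKey
	Signature []byte
}

func stateRoot(cookies uint64) [32]byte {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], cookies)
	return sha256.Sum256(buf[:])
}

// verifyBlock is what a single-shot fraud proof has to do: start from the
// previous cookie count, replay every stored transaction (checking each
// signature), and compare the recomputed root with the claimed one.
func verifyBlock(prevCookies uint64, txs []ClickTx, claimedRoot [32]byte) bool {
	cookies := prevCookies
	for _, tx := range txs {
		if !ed25519.Verify(tx.PubKey, []byte("click"), tx.Signature) {
			continue // invalid signature: this click doesn't count
		}
		cookies++
	}
	return stateRoot(cookies) == claimedRoot
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	txs := []ClickTx{{PubKey: pub, Signature: ed25519.Sign(priv, []byte("click"))}}

	// Honest claim 0 -> 1: the replay agrees, no fraud.
	fmt.Println("honest block ok:", verifyBlock(0, txs, stateRoot(1)))

	// Fraudulent claim 0 -> 1000 with only one click: the replay disagrees.
	fmt.Println("fraudulent block ok:", verifyBlock(0, txs, stateRoot(1000)))
}
```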

Nicholas: That's fascinating. You mentioned also single shot fraud proving. I don't know if you wanted to expand on that a little bit for people who aren't so familiar.

Scott Sunarto: Oh yeah, for sure. So single-shot fraud proving is the exact opposite of what Optimism and Arbitrum do with the interactive verification game. The interactive verification game is multi-shot, in the sense that you trigger a fraud proof and then need to submit multiple transactions, multiple shots, to the smart contract to complete the fraud proof process. That's what I mean by multi-step or multi-shot, however you want to call it. With the cookie clicker rollup, because the computation is so simple, and because in my example each transaction is a block in its own right, when you replay a block you're only replaying a single transaction, and you can be fairly confident you're not going to run out of gas replaying that one transaction. That's why you can do it within a single shot: you really only need one transaction to the layer one to conduct a fraud proof. And this is really interesting, because single-shot fraud proving is a very convenient property to have. One of the reasons we're not able to reduce the challenge period for fraud proofs is that we need to account for the possibility of someone trying to DDoS the layer one. Say there's a very motivated actor with a lot of money; they can burn a lot of gas to fully congest the layer one so that the interactive verification game can never be completed. If it's a single shot, it's much easier to force that one transaction through a small crack during the congestion. But in a period of continuous high congestion, with a DDoS attack going on, trying to complete an interactive verification game, where you might need multiple transactions back to back within a very tight window, might be challenging. That's why we don't have, say, a one-minute challenge period: what if I can just spam the chain for one minute and it becomes impossible to complete the interactive verification game within that minute? And so...
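
For contrast, a toy sketch of the multi-shot idea: an interactive verification game bisects the disputed execution trace until the two parties disagree about a single step, and only that step is re-executed on chain. This is a heavy simplification of what Optimism and Arbitrum actually implement, and the names are illustrative:

```go
// findDisputedStep assumes both parties agree on the pre-state
// (claimed[0] == actual[0]) and disagree on the final state. It returns
// an index hi such that they agree on the state before step hi but
// disagree after it; only that single step then needs on-chain replay.
// In the real game each round of bisection is a separate L1 transaction,
// which is why it is "multi-shot".
func findDisputedStep(claimed, actual [][32]byte) int {
	lo, hi := 0, len(claimed)-1 // invariant: agree at lo, disagree at hi
	for hi-lo > 1 {
		mid := (lo + hi) / 2
		if claimed[mid] == actual[mid] {
			lo = mid // still agree up to mid: divergence is later
		} else {
			hi = mid // already disagree at mid: divergence is earlier
		}
	}
	return hi
}
```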

Nicholas: Is single-shot finality, or sorry, single-shot fraud proving, viable for a more complex rollup, or maybe in the ZK context? Or is it really only possible for cookie clicker?

Scott Sunarto: So with ZK rollups, I wouldn't use the term single shot, but when you're submitting the ZK proof, it's verified within a single transaction. You don't have to separate it into multiple proofs. Actually, this is the interesting part: let's say you have a lot of ZK rollups, say you do fractal scaling, so you have layer threes and so on, and you also need to verify those computations. Then you can use recursive proofs to aggregate all of that into a single proof that you only have to verify on chain. So you can aggregate multiple layer threes into a single proof that you verify on chain. That's a bit of a sidetrack, but it's a very interesting field of exploration as well.

Nicholas: Yeah.

Scott Sunarto: So with single shot, it's definitely possible to do it for rollups that are more complicated, but they would have to make sacrifices, right?

Nicholas: So maybe for some app chains it may be possible to have that, or for ZK, in another sense, you get this kind of single-shot feature. Yeah.

Scott Sunarto: Even with the EVM, right? At some point the Optimism team was convinced they could do single-shot fraud proving. But the way they were able to do that was actually quite similar to the way Cookie Clicker does it. At some point, I don't know if you remember this, each transaction in Optimism was a single block.

Nicholas: Yes, I do remember.

Scott Sunarto: For the exact reason that they wanted to be able to do fraud proving on a single block, or a single transaction out of a single block. Again, it's been a while, and I wasn't on the Optimism team so I don't know their internal discussions, but from a quick observation this makes sense if you want to do single-shot fraud proving: if you have a block with, say, 20 transactions, you cannot fraud-prove 20 transactions in a single shot, but you probably can if it's only a single transaction, a single block. However, that opens up a problem in the EVM, because that's not how the EVM works. A lot of the tooling around Ethereum, like EthereumJS and the various indexers, doesn't really feel comfortable working with this very odd single-block, single-transaction paradigm. It also introduces significant performance overhead, because now for every single transaction, which is one block, you have to re-Merkleize, produce new block headers, and so on; that's one of the inherent overheads of blockchains. So the transition to Bedrock, where you have multiple transactions in a single block, is I think a step in the right direction: it's more similar to the Ethereum layer one, it's more compatible with existing tooling, developers don't have to learn about these footguns, and it's more performant, because there's less overhead from creating a new block for every transaction. But you're making a trade-off: now it's basically impossible to do single-shot fraud proving for these kinds of rollups. That's my answer there. Another interesting thing is that if you don't use the Ethereum layer one as the settlement layer, you could potentially do single-shot fraud proving for more complex rollups. One possible inspiration, and I'm just going to throw it out there: let's say you build, well, actually let's not make it an OP Stack chain, because that opens another can of worms. Let's say you build a new layer one that has a precompile, or some protocol-level feature, that allows you to replay a rollup block. Because you have full control over your own L1, you can configure it such that there's no gas limit for replaying the rollup transactions, or however you want to configure it, and you don't have to run it on top of Solidity; you can run it natively at the protocol level, so it's more optimized, yada yada. In that process you can do single-shot fraud proving. But in this scenario you would need a very specialized settlement layer to be able to do it.
But that's not really what most people in Ethereum want. People want to stick with the Ethereum layer one for settlement, because that's where you're able to inherit the security of Ethereum. So again, it's an interesting possible point of exploration, but you definitely lose the ability to do it on Ethereum, because on Ethereum you're constrained by what the layer one EVM is able to do. And, yeah.

Nicholas: Right. So this brings us to the question of EVM equivalence. The original rollup design by Barry Whitehat was actually a ZK rollup. But over time, and especially in the last six months to a year, EVM equivalence has really become the most important thing for a successful optimistic rollup, at the very least. Vitalik has a blog post, and there's a graph that I'll include in the show notes for this episode, describing the four types of rollups. You can imagine the left side of the spectrum, type one, as pure Ethereum equivalence, and all the way on the right side, type four has increased prover performance, basically flexing the abilities you get from an L2 that isn't equivalent to the EVM while still rolling up to the L1 chain. It feels like we started on the far right side, with a prover that was very ZK specific in Barry Whitehat's implementation, and we've slowly come back to the left, to type one: Optimism, Arbitrum, Base, etc. are all EVM equivalent. And yet it feels like now we're maybe moving back in the other direction, where people are starting to explore, or maybe they've been exploring the entire time, but the availability of EVM-equivalent rollups starts to pique our interest in what else an L2 could enable. I'm really thinking of P.S. Horn's work on the EVVM and Forth.Energy, where he basically treats the L2 as a communications protocol for talking to L1, rather than the paradigm most people are in right now, at least in the popular rollup space, which is very EVM equivalent: deploy your L1 Solidity contracts directly to all these L2s simultaneously, etc. Cookie Clicker is kind of an example of one that gains prover performance in some dimensions, like the single shot you've been describing. I'm curious if you have any thoughts on what interesting prover performance gains we could have if we move beyond EVM equivalence in the popular meta around rollups?

Scott Sunarto: I'm of course very biased here, because at Argus we spend a lot of time thinking about how we can marry the things we like about the EVM, making the developer experience as seamless as possible on the EVM side, while also exploring new dimensions of how we can customize and extend rollups in ways that enable new applications, and specifically for us, new kinds of games, to be built. So I think the answer is that there's a balance. For instance, things that I think are and should be table stakes: the ability to use the standard Solidity compiler. You shouldn't need a specialized Solidity or Vyper compiler to deploy your smart contract onto a new rollup, whether it's optimistic or ZK. That's the part of EVM equivalence that is really important, along with making sure MetaMask works without any additional snaps, and making sure Ethers.js works seamlessly with your rollup without any custom version. Those are the things about EVM equivalence that matter: the existing dev tools and user tools can communicate with your rollup without any additional work that needs to be done.

Nicholas: You benefit from the whole ecosystem of tooling that's compliant with Solidity and with the EVM.

Scott Sunarto: Yes. So you want to have equivalence at the protocol level, at the EVM level, not only at the Solidity level. A lot of people say they're EVM equivalent because you can deploy Solidity smart contracts onto their blockchain, but when you try to use MetaMask with it, or Ethers.js, or Foundry, or Hardhat, you realize you can't do that seamlessly. And that's a problem, because it means developers who want to build on your chain now have to use a different set of tools, and users need to install a different set of tools to access your chain. That is the extent to which I think EVM equivalence is important and useful. Now, I think there's an opportunity to explore how we can upgrade the EVM blockchain itself. This is a path a lot of people, like the rollup-as-a-service providers, are exploring right now, for instance with custom precompiles. Say you want to do ZK verification at the protocol level; you can use a custom precompile for that. But there are also more complex things you can do. For example, at Argus we have a second runtime running side by side with the EVM that allows us to do very complex game computation, in a way that is much more performant than what is possible on top of the EVM, because we're able to optimize at the bare-metal level: the computation operates directly on the system's bare metal, and we have full access. It's like a stateful precompile that you can manage and configure and so on. We also have a sharding system where you're able to spin up multiple of these game shards that run side by side with the EVM base shard, which lets you horizontally scale the chain. So there are a lot of dimensions of exploration that open new doors for applications that previously weren't possible.

Nicholas: Would you say you can achieve all of that by essentially putting custom precompiles on top of something like an optimistic rollup, or are there other things you're doing beyond just having access to some kind of compute that runs on bare metal and is available within the Solidity context? Is there something else beyond precompiles that you do?

Scott Sunarto: Yeah, yeah. So the way we think about precompiles is that they're really just the cheat code for the EVM. A precompile allows you to escape the context; it's basically the red pill of the EVM. It allows you to escape the matrix of the EVM and go out to bare metal. So we're using precompiles to escape the context of the EVM and communicate with the game shards. The game shards themselves are not implemented as precompiles, but the EVM uses a precompile to talk with these game shards that live out in bare-metal land.
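
To make the "escape hatch" idea concrete, here is a hedged sketch of a forwarding precompile in a Go EVM client. The vm.PrecompiledContract interface (RequiredGas / Run) does exist in go-ethereum, but the game-shard client, the gas numbers, and the way the precompile gets registered are assumptions for illustration; the actual World Engine implementation may differ:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/core/vm"
)

// GameShardClient is a hypothetical bridge to an out-of-EVM game shard.
type GameShardClient interface {
	Submit(payload []byte) ([]byte, error)
}

// gameShardPrecompile implements go-ethereum's vm.PrecompiledContract.
// A Solidity contract calling this precompile's address effectively
// "escapes" the EVM: the input bytes are forwarded to a game shard
// running on bare metal, and its response is returned.
type gameShardPrecompile struct {
	shard GameShardClient
}

func (p *gameShardPrecompile) RequiredGas(input []byte) uint64 {
	return 3000 + uint64(len(input))*10 // flat plus per-byte; illustrative numbers
}

func (p *gameShardPrecompile) Run(input []byte) ([]byte, error) {
	return p.shard.Submit(input)
}

// echoShard is a stand-in shard that just echoes the payload back.
type echoShard struct{}

func (echoShard) Submit(payload []byte) ([]byte, error) { return payload, nil }

func main() {
	var pc vm.PrecompiledContract = &gameShardPrecompile{shard: echoShard{}}
	out, _ := pc.Run([]byte("move player north"))
	fmt.Printf("gas=%d out=%q\n", pc.RequiredGas([]byte("move player north")), out)
	// Registering this at an address requires patching the client's
	// precompile table, which varies by go-ethereum version.
}
```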

Nicholas: What kind of computation would you be doing in one of these game shards?

Scott Sunarto: Yeah, so the game shards are doing game simulations, the same things you'd imagine a game engine doing. I wouldn't say a game engine like Unity, because that opens a can of worms that would confuse most people, but every game computation. I'm assuming you're familiar with Dark Forest, right? Yeah, yeah. So what you can do in these game shards is perform the computation for game logic. One of the most interesting things about the game shards is that they're loop-driven in nature. Traditional blockchains are event-driven in nature; most applications are event-driven, Twitter is event-driven. What I mean by event-driven is that a state transition, a state mutation, only happens when there's a user event. You write a text on Twitter and click the tweet button; that event triggers an action, a state mutation, on the Twitter backend. It's the same with blockchains: you submit a transaction, and therefore a state mutation happens. However, if you've played games before, you know that in a lot of games, even if the user is AFK and not touching the keyboard, game time keeps running: water continues to flow, physics continues to run, the zombie behind you continues to chase you. Even without user events, there's a continuous loop going on in the background. That's what we designed the game shard to emulate: a runtime, an execution layer, that is much more akin to a traditional game runtime. And that opens a lot of really interesting doors. For instance, now you can build a fully on-chain game with a full physics simulation. You can have the idea of gravity, of water flowing across a plane, of physics. Those are things that wouldn't be possible on a traditional, event-driven blockchain, because it would expect a user transaction to trigger, say, applying gravity.
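
A minimal sketch of the loop-driven idea in Go; this is illustrative rather than World Engine's actual API. Each tick the shard drains queued player messages and then runs its systems (here, gravity), whether or not any user did anything:

```go
package main

import (
	"fmt"
	"time"
)

type Player struct{ X, Y float64 }

// World is the game shard's state: it advances every tick, not only when
// a user transaction arrives.
type World struct {
	Players map[string]*Player
	Inbox   chan func(*World) // queued player messages (the event-driven part)
}

// gravity is a system that runs every tick regardless of user input.
func (w *World) gravity(dt float64) {
	for _, p := range w.Players {
		if p.Y > 0 {
			p.Y -= 9.8 * dt
			if p.Y < 0 {
				p.Y = 0
			}
		}
	}
}

func (w *World) tick(dt float64) {
	// 1. Apply any player messages that arrived since the last tick.
drain:
	for {
		select {
		case msg := <-w.Inbox:
			msg(w)
		default:
			break drain
		}
	}
	// 2. Run the loop-driven systems.
	w.gravity(dt)
}

func main() {
	w := &World{
		Players: map[string]*Player{"alice": {X: 0, Y: 10}},
		Inbox:   make(chan func(*World), 64),
	}
	for i := 0; i < 3; i++ { // three fixed ticks; a real shard loops forever
		w.tick(0.1)
		fmt.Printf("tick %d: alice.Y=%.2f\n", i, w.Players["alice"].Y)
		time.Sleep(10 * time.Millisecond)
	}
}
```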

Nicholas: And so with a loop, how do the shards relate? Let's say there's something like a physics system, or some kind of runtime that corresponds to, let's say, real-world time. How would that interrelate with the aspects of it that are being rolled up?

Scott Sunarto: Yeah, yeah. So the way we designed the World Engine is that it's basically a shared sequencer. And again, this is a can of worms, because shared sequencers are another hour of conversation. But a shared sequencer just means that within a single sequencer you have multiple execution layers. In the cookie clicker rollup example, you have one execution layer and one sequencer. With a shared sequencer, you have one sequencer but multiple execution layers. So the base shard, the EVM runtime, is one execution layer; a game shard is another execution layer; and you can have multiple game shards, each another execution layer. But all of them use the same sequencer. Different shared sequencers have different constructions. Most shared sequencers you see right now prioritize synchronous composability and total ordering of transactions, so that they have atomic composability. When we were designing the World Engine, we quickly realized we don't need synchronous composability or total ordering of transactions for gaming use cases. So we very explicitly decided not to try to build a shared sequencer with those atomic composability properties, because they're just not very useful for games, and they're actually one of the toughest challenges in a shared sequencer. The reason a lot of these shared sequencers are still very much in research is that there are open-ended research questions: how do you resolve deadlocks, how do you make the shared sequencer system secure from DDoS attacks, and so on, if you want things like atomic composability. If you watch my Research Day talk on the World Engine, there are actually a lot of interesting open questions about atomic composability with shared sequencers that I don't think are resolved right now. Even with the Superchain, which is going to be one of the shared sequencers with atomic composability as an endgame, there's still an ongoing discussion about how it's even going to be implemented. Because you need to make sure that if you're doing a transaction that atomically composes two rollups together, one rollup isn't intentionally DDoSing the other by halting itself, right?

Nicholas: Like how do you... So for you, for the world engine, the problems are more akin to something like a real time multiplayer game where you're concerned with, I guess, synchronous timing of things rather than atomic transactions.

Scott Sunarto: Yeah. So with our design for the World Engine, we explicitly do not want synchronous composability; we want asynchronous composability. You're able to talk from one shard to another shard, but you don't do it atomically. A contrast to this is something like flash loans. Flash loans are atomic: within a single transaction you borrow money from Aave, swap it on, say, Uniswap, liquidate something, and then repay the Aave loan, all atomically. With synchronous, atomic composability you can do that; with asynchronous composability you cannot, because you can't complete that entire flow in a single shard, in a single hop. It's multiple hops, essentially. And that is the trade-off we explicitly make, because we spent a lot of time thinking about what the use cases of atomic composability actually are, and the only examples we could come up with are primarily around MEV and DeFi. And it's not even double spend, because double spend is very much not an issue with asynchronous composability: you can just lock things on one end, mint on the other end, and resolve it using callbacks. So asynchronous composability doesn't have any issue with double spend. The only trade-off versus synchronous composability is really that you can't do flash things: you can't do a flash loan, you can't do a flash swap.
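
A sketch of the asynchronous lock-then-mint pattern he describes, in the same illustrative Go style. Message delivery between shards is modeled with a channel, and all the names are made up for illustration; in a real system the hop would go through the shared sequencer:

```go
package main

import "fmt"

// CrossShardMsg is an asynchronous message from one shard to another.
type CrossShardMsg struct {
	Player string
	Amount uint64
	Ack    chan bool // callback: did the destination apply it?
}

type Shard struct {
	Name     string
	Balances map[string]uint64
	Locked   map[string]uint64
	Outbox   chan CrossShardMsg
}

// SendAsync locks funds on the source shard and emits a message. There is
// no atomicity with the destination shard, hence no flash-loan-style flows.
func (s *Shard) SendAsync(player string, amount uint64) {
	if s.Balances[player] < amount {
		return
	}
	s.Balances[player] -= amount
	s.Locked[player] += amount // locked until the callback arrives
	s.Outbox <- CrossShardMsg{Player: player, Amount: amount, Ack: make(chan bool, 1)}
}

// Receive applies the message on the destination shard and acknowledges it.
func (d *Shard) Receive(msg CrossShardMsg) {
	d.Balances[msg.Player] += msg.Amount // mint on the destination
	msg.Ack <- true
}

func main() {
	a := &Shard{Name: "A", Balances: map[string]uint64{"alice": 100},
		Locked: map[string]uint64{}, Outbox: make(chan CrossShardMsg, 1)}
	b := &Shard{Name: "B", Balances: map[string]uint64{}, Locked: map[string]uint64{}}

	a.SendAsync("alice", 40)
	msg := <-a.Outbox
	b.Receive(msg)
	if <-msg.Ack {
		a.Locked["alice"] -= msg.Amount // resolve the lock via the callback
	}
	fmt.Println("A balance:", a.Balances["alice"], "B balance:", b.Balances["alice"])
}
```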

Nicholas: None of which matters for the gaming context you're interested in.

Scott Sunarto: Right. For instance, one thing we want for games is the ability to travel from one game shard to another. The whole concept of the World Engine is very much inspired by the server architecture of games like World of Warcraft. If you've played World of Warcraft before, you might be familiar with the idea of shards, where the map is divided into smaller chunks, and every chunk has a dedicated server servicing it. That allows a game like World of Warcraft to horizontally scale and support a larger number of players than it otherwise could. We adopt a similar architecture for the World Engine. So say you're trying to travel from chunk A to chunk B: you make an asynchronous call from chunk A to chunk B saying, hey, I'm leaving chunk A and moving to chunk B, make sure my player location is updated to chunk B. And that's done asynchronously. You don't need atomic composability to do that. And so that's kind of...

Nicholas: You're really seeking that so you can run games where the scale of the number of users you anticipate would be challenging to run all inside a single process.

Scott Sunarto: Yes, that is correct.

Nicholas: Got it. So, you know, we've kind of implicitly talked about it a little bit, but can you just give us a little overview of like, what's Argus? What are you building? How does the world engine fit in?

Scott Sunarto: Yeah, for sure. So Argus is a game publisher and studio and research lab. We're doing a lot of things, but the key thing we want to do is build cool on-chain games. We realized very quickly that if you want to build cool on-chain games, you cannot really avoid building cool infrastructure. I think the situation right now in crypto is very similar to the early days of gaming itself, back in the 1980s and 1990s, when games like Doom and Unreal Tournament first came out. All the early game studios built their own engines, because there were no other companies building game engines for them, and they knew their own problems best. It's the same for us at Argus. We're basically building what we like to call the Epic Games of crypto: Epic Games has Unreal Engine, which is their engine, but they also have a game studio building games like Fortnite. At Argus we're doing the same thing: we're building the World Engine as the engine that's going to enable our games, and we also have a game studio and game publisher arm that develops and publishes games that run on top of the engine. That's basically what we do, and that's why we spend so much time thinking about rollups: the World Engine is basically how we would imagine game servers or game platforms would look if they were rollups. We've blurred the line between what is a game server and what is a blockchain. And so that's basically...

Nicholas: Can you give me a sense of what kind of game, like, you know, you mentioned World of Warcraft, but do you literally mean an experience like World of Warcraft? or just in terms of sharding? Like what kind of game user experience do you think you can really...

Scott Sunarto: In the endgame, we can build games like that. I think the whole thesis of the World Engine is to blur the line between what we can build fully on-chain and what we can do in Web 2.0, right? One of the interesting examples we did: are you familiar with a game like Agar.io? The real-time game where you move around, eat little dots, collect points, and eat other players?

Nicholas: Like a 2D kind of multiplayer Snake, almost.

Scott Sunarto: Yes. So a lot of people don't realize this, but Agar.io is very, very performance intensive on the server side, because all of it is happening in real time, with a lot of players on the same map moving all the time. From a graphics perspective it's pretty shit, but on the game backend and game networking side it's actually pretty complex. And we were able to build a fully on-chain Agar.io using the World Engine. That is basically...

Nicholas: Is that something I can play right now?

Scott Sunarto: I can. Actually, I can happily send you a link on Telegram. I don't share this publicly, but I can send it to you on Telegram.

Nicholas: Is it something you plan on launching more widely eventually?

Scott Sunarto: We plan to launch this more widely, but right now we're keeping it low key. You can see pictures of it, though. We've shared the progress of us building this fully on-chain Agar.io on Twitter before; just search my tweets for Agar.io references and you'll find screenshots. And this wasn't even one of our core engineers' projects: we hired two interns at Argus, and they independently built this Agar.io clone that is fully on-chain. It's basically the first real-time fully on-chain game that I know of that has ever been built, so I'm very excited about it.

Nicholas: Can you describe a little bit what's the player experience like? Like I connect to the website, I connect with my wallet, I sign a transaction and then I just have a seamless experience or am I sending transactions periodically? How does it feel to use?

Scott Sunarto: Yeah, that's a great question. And surprisingly, it's even better than what you just described: you don't even need to worry about wallets. As part of the World Engine we also built our own account abstraction mechanism, so our chain basically has what we like to call native account abstraction. On the Ethereum L1 specifically, account abstraction is done through smart contracts; the Ethereum layer one doesn't know the account abstraction feature exists, it happens through a smart contract. When we were designing the game shards for the World Engine, we knew we would need account abstraction, so we designed it in from the get-go at the protocol level. That makes the whole integration much easier, and we're able to do more interesting things with the account abstraction system as well. That's what we adopted for onboarding in the Agar.io game, and for another game we've been working on that is currently undisclosed. When you open the website, the only thing you need to do is select a server, select a region you want to play in (we have Europe and America deployed), and then you're connected to that server and you can just play directly. We basically have a burner wallet, if you want to think of it that way, stored in your browser's local storage; that's your identifier. You don't need MetaMask if you don't have it or aren't ready to use it. It just seamlessly logs you into the game. And one of the features we plan to add is that once you like the game and have made significant progress with your burner wallet, you can secure and attach your account using MetaMask, so you can recover that account later if you clear your browser's local storage or move to a new browser or a new computer. So that's basically...

Nicholas: So essentially the experience is, I don't need to sign anything?

Scott Sunarto: You don't have to sign anything. Because if you had to sign anything in an Agar.io game, you'd have 20 pop-ups per second; it's just physically not possible to have MetaMask pop-ups for an Agar.io game. So it's fully abstracted away from you. There's no signing at all.

Nicholas: You abstract the signing because you just have an EOA in local storage. But on the back end, how often is a signature required?

Scott Sunarto: Yeah. So basically every game move needs a transaction. But we have a very interesting trick where we do signature aggregation in the account abstraction bundler. Sorry, I'm going to sneeze a little bit here. So in the account abstraction bundler, instead of submitting a signature for every transaction, you can configure it to aggregate signatures and basically provide one signature for thousands of transactions.

Nicholas: So that's why you include the AA directly in the chain.

Scott Sunarto: Yeah. So that allows us to significantly reduce the footprint of data availability costs, because if you measure how much space transaction data takes up on the layer one for rollups, the majority of it is the transaction signatures. So if you're able to aggregate multiple user transactions into a single signature, you save a significant amount of the cost. That's an optimization we made as well: we reduce the transaction calldata cost of storing that on the layer one. So this is how we make the user experience better, make performance better for the account abstraction bundler, and on top of that save some costs on layer two operations and the final gas costs that the user has to pay. It's a pretty neat trick that we're pretty excited about as well.
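
A rough sketch of the data-footprint argument, in Go with a stubbed-out aggregation scheme. Real implementations would use something like BLS with ERC-4337-style aggregators; the types and names below are hypothetical. The point is simply that the bundle posted as calldata carries one aggregate signature instead of one signature per game move:

```go
package main

import "fmt"

// GameMoveOp is one user operation: a game move plus its signature.
type GameMoveOp struct {
	Payload   []byte
	Signature []byte // e.g. 65 bytes for a plain ECDSA signature
}

// Aggregator stands in for a real aggregate-signature scheme (e.g. BLS).
type Aggregator interface {
	Aggregate(sigs [][]byte) []byte
}

// Bundle is what gets posted as calldata: the payloads plus ONE signature.
type Bundle struct {
	Payloads     [][]byte
	AggregateSig []byte
}

func MakeBundle(ops []GameMoveOp, agg Aggregator) Bundle {
	payloads := make([][]byte, len(ops))
	sigs := make([][]byte, len(ops))
	for i, op := range ops {
		payloads[i] = op.Payload
		sigs[i] = op.Signature
	}
	return Bundle{Payloads: payloads, AggregateSig: agg.Aggregate(sigs)}
}

// fakeAgg is a stand-in that just returns a fixed-size blob (BLS-sized).
type fakeAgg struct{}

func (fakeAgg) Aggregate(sigs [][]byte) []byte { return make([]byte, 96) }

func main() {
	ops := make([]GameMoveOp, 1000)
	for i := range ops {
		ops[i] = GameMoveOp{Payload: []byte("move"), Signature: make([]byte, 65)}
	}
	b := MakeBundle(ops, fakeAgg{})

	fmt.Printf("signatures posted naively: %d bytes; aggregated: %d bytes\n",
		len(ops)*65, len(b.AggregateSig))
}
```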

Nicholas: That's sweet. So, you know, we've been talking for a while, so I don't want to keep you all night in Jakarta. But, you know, if people want to learn more about Argus and World Engine and all this, where is a good place for them to check that out?

Scott Sunarto: Yeah, so our Twitter account is the best place to follow our updates: the Twitter account is Argus Labs. I post updates on my own Twitter account as well. We also have our website if you're interested in learning more about what we do, and a blog post that describes the World Engine. I did a talk at Research Day; if you search for DBA Research Day World Engine, you should see my video talking about the World Engine architecture. I also did a talk at Modular Summit about the World Engine, so if you search World Engine Modular Summit you'll see that talk too. And again, my DMs are open on Twitter if anyone's interested in learning more about Argus, the World Engine, or just on-chain games in general, or Layer 2s. I'm happy to jam in DMs.

Scott Sunarto: Yeah, I think in general the nature of the space right now is, as I said before, very much like the early days of computer games. Everyone is exploring different ways of building their game engines. We have our approach, and we know the kind of games we want to build and the kind of engine that's going to support those games. Some people might want to build similar kinds of games, take similar approaches, or share the same philosophy as us, and that's why they'd use the World Engine. But at the end of the day, a lot of people are just exploring the best way to build their games. And I think Autonomous Worlds are really one category within this larger category of on-chain games. Autonomous Worlds are a very specific philosophy of designing games, a very specific philosophy of world building, and so on. It's not that we're only doing Autonomous Worlds; it's definitely something we're interested in, but it's not something we exclusively build. The World Engine goes far beyond just building Autonomous Worlds; it's really designed for on-chain game developers. Our grand vision with the World Engine is to blur the line between building Web2 games and Web3 games so that we can have more things on-chain. My whole thesis with on-chain games is that if we have more things on-chain, we get more emergent behavior, and more emergent behavior means interesting experiments getting built. That's why I'm so excited about encouraging people to build more on-chain games, whether with the World Engine or with other on-chain game engines of choice. So my answer is that different projects are building different game engines, like in the early days, and in general I'm pretty excited about the different directions people are taking in designing their quote-unquote game engines on-chain.

Nicholas: Just before we go, are there any experiences that are live, even on testnet, any kind of gaming in this space that people can access right now, that you're excited about and that we could point people towards?

Scott Sunarto: Well, I would check out Primordium. I don't know if you guys have seen it.

Nicholas: No, I haven't seen that.

Scott Sunarto: Yeah, yeah. Primordium is a great on-chain game that some friends of mine are building. It's basically a fully on-chain Factorio. I highly recommend checking it out. They did a recent update, and the art style is going to be amazing once they launch it, so definitely check out Primordium. There are other games that aren't on testnet right now but are getting pretty close. Oh, actually, one is already on testnet: Sky Strife. Lattice is working on Sky Strife, and they've been running playtests on it these past few weeks. I don't know if they're still running them, but if they are, definitely check that out as well. Yeah, I think those are the fully on-chain game projects I'd highlight.

Nicholas: - Very cool, I'm gonna have to try them. Scott, thank you so much for coming on today. This was an awesome conversation and I learned a lot about L2s.

Scott Sunarto: - Thank you for having me.

Scott Sunarto: - Thanks everyone, see ya.

Nicholas: - Hey, thanks for listening to this episode of Web3 Galaxy Brain. To keep up with everything Web3, follow me on Twitter @Nicholas with four leading N's. You can find links to the topics discussed on today's episode in the show notes. Podcast feed links are available at web3galaxybrain.com. Web3 Galaxy Brain airs live most Friday afternoons at 5 p.m. Eastern time, 2200 UTC, on Twitter Spaces. I look forward to seeing you there.

