Web3 Galaxy Brain đŸŒŒđŸ§ 


Parthasarathy Ramanujam, Co-Founder & CTO of Etherspot

26 March 2024


Transcript

Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guest today is Partha Ramanujam, co-founder and CTO of EtherSpot. EtherSpot is a 4337 account abstraction service provider. On this episode, Partha and I discuss the Pillar Personal Data Locker and how it led him and his team to account abstraction and founding EtherSpot. We get into the details of their AA bundler, React library, and embedded wallet services, and we also discuss the exciting new P2P bundler they've been working on, which is seeing adoption from peers in the space. It was great getting to know more about Partha and EtherSpot. I hope you enjoy the show. As always, this show is provided as entertainment and does not constitute legal, financial, or tax advice or any form of endorsement or suggestion. Crypto has risks, and you alone are responsible for doing your research and making your own decisions. Partha, welcome to Web3 Galaxy Brain.

Partha Ramanujam: Thanks for having me, Nicholas.

Nicholas: I'm excited to talk all about EtherSpot. I guess first up, what is EtherSpot?

Partha Ramanujam: Right. So to talk about Etherspot, I should probably talk about the company called Pillar. Pillar was one of the earlier ICOs back in 2017. We started off as a group of volunteers who wanted to build a personal data locker, and Pillar Wallet was our first product. We built a mobile wallet that worked on Ethereum, first back in mid-to-late 2018. Based on our experiences building a mobile wallet back then, we slowly navigated towards smart contract-based wallets, and we were probably the first company to release a smart account, back in early 2019, which worked only on Ethereum. We called that SDK Archanova. But unfortunately for us, immediately after we released this smart account, gas prices on Ethereum spiked, and we ended up sponsoring close to $200 per account for deployment, which really hurt the user experience for us. Around that time we had a lot of EVM-based alt-L1s coming up, like Gnosis (xDai) and Polygon. So we used the experience gained from the launch of that smart account and built a cross-chain, or multi-chain, smart account SDK, which we called Etherspot. We released a new version of Pillar Wallet with support for five EVM-based chains, and we were auto-deploying accounts for the user on Gnosis and Polygon. Eventually it moved to auto-deployment only on Gnosis, because Polygon was having a lot of activity and sponsoring was not ideal, so we let users decide to deploy based on whether they wished to interact with a particular chain or not. All of these smart account frameworks had support for gasless transactions, batching, and other features, but it was a centralized system in one way or another, so there was always a single point of failure. And we were always on the lookout to decentralize, because the main aim of Pillar was to build a personal data locker, so the ethos of Pillar, and of us at the core, was decentralization, and we always wished to decentralize the infrastructure. We were right in time when the 4337 spec became quite popular. There were a lot of people building on this new spec, and we decided to support the ERC and build our own offering compliant with ERC-4337. That gave birth to Etherspot, and we incorporated Etherspot as a separate entity as well, here in London. We've been working closely with the ERC-4337 core team on decentralizing the bundlers. Although bundlers as a spec were available, and the core team had it on their roadmap to enable a P2P layer for bundlers, which would ensure the infrastructure is not centralized to one particular provider, we decided to help: we wrote the spec and came up with a reference implementation of the bundler with the shared mempool setup. And, I mean, I'm breaking the news here: we are going to make this public next week.

Nicholas: So the big announcement. Does it have a name, or do we have to wait for that?

Partha Ramanujam: It's called the shared mempool, or rather the mempool for ERC-4337, and we are launching this live next week.

Nicholas: Wow, amazing. And is that coming solely from EtherSpot, or is that a collaboration with other projects?

Partha Ramanujam: No, EtherSpot wrote the spec, but we have four or five other bundler teams who have implemented it. There have been contributions from several other teams like Alchemy, Silius, and Candide. All these teams have implemented their bundler as per the P2P spec, and we'll be launching a multi-client, or multi-bundler, implementation on an Ethereum testnet first, slowly moving on to other L2s and alt-L1s as well.

Nicholas: Wow, amazing. Okay, I have so many questions. So the P2P bundler seems to have traction then. Sounds like a bunch of teams are integrating it, so people who are interested in decentralizing bundling or decentralizing the mempool are on board.

Partha Ramanujam: Yes, yes, of course. And that's one good thing about Ethereum, right? Decentralization is something everybody is keen on, and they are happy to support any initiative that looks to decentralize the network. They've always been on board. I must give a shout-out to all the teams and all the developers who have helped towards this. It's been a pleasure working with them to see this through.

Nicholas: There are a lot of teams working with 4337 right now. It's pretty remarkable.

Partha Ramanujam: Yeah, of course. I'm not sure if you've heard of the Mafia group, right? We have something called the 4337 Mafia, and I think there are close to 1,900 members there. The group acts as an advocate for the spec and tries to influence other dApps and chains to be compliant with it. So it's been really great working with the community.

Nicholas: That's nice that there's this Schelling point where people are working on it. It is amazing how many startups and projects are built off of it. I was curious, because you mentioned this whole history of Pillar and being in smart accounts, or I suppose you said personal data lockers, since 2019. A couple of questions about that. First of all, is one of the things that's shifted the presentation of the product, so that it's now more of a wallet or a smart account and less of a data locker?

Partha Ramanujam: So the data locker is still on the roadmap of Pillar Wallet as such. The vision of the data locker was based on a book by our former CEO, David Siegel. He wrote a book named Pull, back in the 90s, I believe, which had these ideas about the semantic web and the vision of having users, or rather individuals, control their own data and share information only with the people they choose to. So that's what started off the journey of Pillar. And because in late 2017 and early 2018 the infrastructure was not as developed as it is today, our first gateway to realizing this vision was a wallet. But once we went down that lane, we saw that the majority of users of Ethereum, or of blockchain in that space, were mostly focused on finance and finance-based applications. So we went ahead and built the wallet specifically for DeFi. We still have some intention to continue with the vision of the data locker. Especially after Filecoin and a few other networks have come up, and with the advances in fully homomorphic encryption and other things, a lot of the ideas we had around the personal data locker are now realizable, at least as a POC. So from Pillar's perspective we wish to go down that route as well. But we'll see. It depends on how things go in the future and where we are.

Nicholas: I guess that was kind of part of my question: did you realize something along the way, since way back in 2019, that maybe the product-market fit, in terms of user adoption, is more about sending transactions, and especially trading, rather than data services right now?

Partha Ramanujam: That's correct. Yeah. So that's the reason our focus shifted, or we pivoted slowly, towards finance applications and DeFi. But we haven't forgotten the original roadmap of building a data locker.

Nicholas: And I wanted to ask about Pillar itself. You said it was a DAO to begin with. How did everybody find each other to get involved?

Partha Ramanujam: As I said, in 2017 it was a DAO. The idea was just growing; it was immediately after the DAO hack, but there were a lot of projects looking to build on that model. At that time I was in India, doing research to find a solution. India had an ID-based system called Aadhaar, and there were a lot of vulnerabilities reported on the Aadhaar infrastructure, and complaints of data leakage and other things. So I was looking at a solution we could build towards that. And I found other like-minded people out here in England and the U.S. who were keen to build something on the semantic web. I just happened to meet them on some Discord or Slack forum at the time. We got together, we debated and discussed how we could go ahead and build this. Eventually we decided, okay, ICOs were the craze back then, so why not put the idea out there and see if we could find support and raise funds? And we managed to raise close to 30 million back then. That's how we started. In fact, I was looking at it the other day: we had a 10-hour-long YouTube video, which we were telecasting back then during the ICO. I just can't believe we even recorded that 10-hour video. It's still on YouTube, where we were tracking the ICO progress.

Nicholas: Amazing. That's so exciting. So I get the sense that decentralization and sovereignty issues, et cetera, the cypherpunk issues, were right at the origin of this community that has become EtherSpot.

Partha Ramanujam: That's right.

Nicholas: Yeah, because I think that's not the case for everybody. There are a lot of more VC-style, traditional startups operating in the space. Not that that's bad, but it is interesting to have something that really started as a DAO and an ICO evolve culturally into something that's providing infrastructure for smart accounts. That's kind of unique.

Partha Ramanujam: That's right.

Nicholas: So, okay, so you've got this P2P bundler. Or maybe before we move on, were there any other major lessons, and maybe you could list off some of the EIPs that you were thinking about during that period of time when you were working on different iterations of Pillar, pre-EtherSpot? I feel like the lesson that came out of what you said earlier is that the decentralized mempool was really the key missing feature of all prior EIPs. Is that accurate?

Partha Ramanujam: Yes, that's correct. In fact, Pillar had collaborated with them previously. I mean, the core team of ERC-4337 used to be the Gas Station Network team. Back then, I think in 2019, we had interacted with the GSN team, looking to join the Gas Station Network as such. So we were aligned with their vision of having something like a gas station, and with trying to decentralize the network even back then. That was always our idea. When the 4337 spec came out, we did debate around the other account abstraction EIPs, like 3074 and the one previous to it, the number fails me. But when 4337 came around, we were very clear that that is what we should build on, and we decided to choose that as the option.

Nicholas: And really, what made it unique was this decentralized mempool, as opposed to the prior options, where there was always some element of centralization. It is interesting that it evolved out of the Gas Station Network and these prior solutions. For people thinking about the long arc of things that sometimes take many years to show up, it may not be the first iteration, but ultimately some of the people who've been working on it for a long time end up being the ones who get it across the line. But there has been some skepticism recently. I did an episode of this show with a panel about 3074 versus 4337, which aren't really opposed, but are different approaches to enabling some overlapping features. Some people are concerned that 4337 introduces more complexity, a whole separate mempool; I suppose some people are just uncomfortable with that, or with the very complex architecture of 4337 in general, or I'm not sure what else people are skeptical of exactly, because it's not simple, and it doesn't upgrade existing accounts, and other things people are worried about. What is it that makes everyone so confident that 4337 is going to happen? Why is everyone, or most people, so sure of that?

Partha Ramanujam: Right, I have a couple of answers for this. Firstly, I would like to borrow your wise words: account abstraction is easy if you don't care about decentralization. Most of the other account abstraction EIPs out there do not cover all aspects; they do not take into account all the requirements, in terms of the rules or standards set by the EVM, around decentralization and such, right? For sure, you have options like 3074, which is essentially complementary to 4337. But the reason 4337 is so complex is that it tries to be as true to Ethereum's core ethos as it can be. And there are several aspects of account abstraction: first you have a wallet, an account, then you have gas abstraction, then you've got, what do you say, network abstraction, and various other features that, to some extent, are hard to deliver while staying true to Ethereum's core ethos; together you could call all of that account abstraction. For sure, people might find the EIP-4337 spec too complex, and that might put them off. But there is a reason why it's that complex. Only if you were to implement all of those features would you be able to achieve account abstraction that is as truly decentralized as possible. And you have to note that the reason it's so complex is that there was no consensus yet to achieve native account abstraction, or native smart accounts, within the EVM network. ERC-4337 takes the path of least intervention: without having to affect the core protocol, it still achieves smart accounts. If I were to use an analogy, you're trying to change the tires of a car while driving it. That's what it tries to achieve.

Nicholas: It's complex because you're not able to change the EVM. Correct. It's complex because it's being executed in an application, or in a standard at the application level.

Partha Ramanujam: Yeah. Without having to break anything that is already working, being absolutely backward compatible with everything else, and ensuring that it doesn't affect anybody running the infrastructure, for example bundlers or paymasters, in any way. It provides protection against DoS, and at the same time ensures bundlers are not griefed by any bad actor within the network. All of these, what do you say, protections require a certain complexity. And that's the reason why it's complex.

Nicholas: In a recent episode with Doan from Clave, they gave a little hint that EtherSpot might be working on something like this P2P bundler. And they mentioned that the challenge was that there's no way to trustlessly communicate from one block builder to another. Is that right? Or maybe, can you clarify a little bit what the challenge is of building a P2P bundler and how you've come up with a solution for it?

Partha Ramanujam: Right. So the main challenge when it comes to P2P is that MEV is present, and front-running is a major problem on all chains at the moment, on EVM-based chains for sure, given the large volume of traffic. One of the ways that users or dApps protect their users against front-running, at least on Ethereum mainnet, is to use something like the Flashbots Protect endpoint. Polygon and BSC also support similar block builders, where the actual transaction is not revealed until it's already confirmed or included within a bundle. L2s still do not have such a solution, but because most L2s, like Arbitrum and Optimism, use a centralized sequencer, the sequencer decides how to order transactions, so front-running at some level can be avoided with the help of a, what do you say, fair sequencer. Now, when you introduce an alt mempool for bundlers, searchers or bots have access to it, and there's additional latency, because the way the 4337 spec works is that a user does not submit a transaction directly to the transaction mempool. They submit something called a user operation, which is held in a bundler first. The bundler then bundles it with other user operations from its own local mempool and relays it on to the main transaction mempool. Now, it's that much easier for a searcher to monitor the user op mempool, pick up a profitable transaction, and front-run it even before your original user op reaches the transaction mempool. So the possibility of getting front-run is a lot higher than when a user directly submits a transaction to the transaction mempool.
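
To make the flow Partha describes concrete, here is a minimal sketch of a wallet submitting a user operation to a bundler over the standard ERC-4337 RPC. It assumes the v0.6 EntryPoint and a bundler that exposes eth_sendUserOperation; the bundler URL and all field values are placeholders, not Etherspot configuration.

```typescript
// The wallet signs a UserOperation and submits it to a bundler's RPC,
// not to the chain's transaction mempool. Values below are placeholders.
interface UserOperation {
  sender: string;                // the smart account address
  nonce: string;
  initCode: string;              // "0x" once the account is already deployed
  callData: string;              // the call the account should execute
  callGasLimit: string;
  verificationGasLimit: string;
  preVerificationGas: string;
  maxFeePerGas: string;
  maxPriorityFeePerGas: string;
  paymasterAndData: string;      // "0x" if the user pays their own gas
  signature: string;
}

const BUNDLER_RPC = "https://example-bundler.invalid/rpc";          // placeholder URL
const ENTRY_POINT = "0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789";   // v0.6 EntryPoint

async function sendUserOp(userOp: UserOperation): Promise<string> {
  const res = await fetch(BUNDLER_RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendUserOperation",
      params: [userOp, ENTRY_POINT],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result; // userOpHash, which the wallet can poll for inclusion
}
```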

Nicholas: Now, let me just see if I understand correctly. So first of all, a small question. How does the user connect to the bundler in the first place? Does the RPC somehow route it, or is the dApp the one that propagates it, or the wallet?

Partha Ramanujam: It's the wallet that routes it to the bundler.

Nicholas: So whatever application is managing my key is the one that signs the user operation. That's correct. Yeah. So it gets propagated to a bundler, and there are many different bundlers. You have a bundler as well, Skandha. Is that right? That's right. And then what you're saying is that the MEV risk is not only, because I believe in that Clave episode, or another recent episode, actually I think it was Pimlico, Christoph mentioned that the bundlers are at risk of being front-run themselves. But you're saying that, let's say on L1 or a chain that doesn't simply have a centralized sequencer, a searcher might analyze a bundle and not just front-run the whole bundle itself, but instead front-run individual user operations within that bundle that are profitable.

Partha Ramanujam: Yeah, that's correct. But taking a step back: as of today, bundlers do not have the P2P mempool. So what happens is that Pimlico runs their own hosted bundler, or Etherspot runs our own hosted bundler, and applications submit their user operations to us. The local mempool within these bundlers is only accessible to the bundler itself. Nobody can peep into it and analyze what user operations are present in it. So a searcher looking to front-run can only get this information from the transaction mempool of the L1 blockchain. Now, imagine that as soon as the P2P interface is enabled, user operations that are sent to the Etherspot bundler would get propagated to Pimlico's bundler as well.

Nicholas: Does that challenge the business model of these bundlers? Typically dApp developers or wallet developers are paying for bundlers right now?

Partha Ramanujam: Yeah, the dApp developers or the wallet developers. Basically, for every bundle that is processed by a bundler, the fees are collected by the bundler.

Nicholas: Right. Of course. Okay. It's collected by the bundler itself. So there is some inclination for bundlers to hold user operations privately and not share them, or how does that work out?

Partha Ramanujam: There is an incentive to keep it that way, but that applies to existing nodes as well, not only to the situation you discussed. What you describe is not unique to bundlers. Even with regular Ethereum nodes, for any transaction that I submit to a node right now, the node can decide not to propagate it to its peers. It could always route it to a builder API itself and benefit in some way or the other. So the situation you described applies to bundlers too, but it is in a bundler's own benefit to propagate, because there is a possibility that the bundler might not find the user ops it receives profitable enough to process. After a while its local mempool would get full and it might not be able to process them.

Nicholas: And it wouldn't just drop unprofitable user ops?

Partha Ramanujam: It could. It could drop unprofitable user ops. What that would do is result in a user being unhappy and choosing a different provider. So that's possible as well.

Nicholas: But in general, what you're saying is that the default for the L1 transaction mempool is an open mempool. And while there is some private mempool activity, it's not the majority.

Partha Ramanujam: That's right. So when I talk about the mempool here, I'm referring to the user op mempool, not the regular L1 transaction mempool.

Nicholas: But it's the same logic as the L1 mempool where, yeah, okay.

Partha Ramanujam: So coming back to the problem, or the challenge, I was describing earlier. Once the P2P layer is enabled, every user op received by Etherspot's bundler would get propagated to Pimlico as well. Now, a searcher, which used to monitor the L1 transaction mempool to pick up transactions to front-run, would have an additional advantage by just monitoring the user op mempool, because these user operations have not yet been relayed to the transaction mempool. So they can, in theory, front-run it even before the user op reaches the transaction mempool.

Nicholas: So hold on. I want to run through all the different types of searching here. The first one, which I have a question about: you spend all this time, energy, money, et cetera, building a bundler, collecting user ops, bundling them, and then propagating them on chain. And yet an MEV searcher can simply front-run that transaction and gather all the profit from doing so, correct? So you have to, I suppose, Flashbots all these bundles. So private mempools at the L1, or whatever chain you're operating on, are important. Correct.

Partha Ramanujam: As of today, all the hosted bundler services like, say, Etherspot or Pimlico use the Flashbots Protect endpoint to relay their bundles, in order to avoid being front-run.

Nicholas: Okay. So that's the first kind. And already private mempools are important. And maybe we can talk -- I'm curious. Do you have ideas about private mempools? Or is that a different subject than the one you're focused on?

Partha Ramanujam: No, no. Private mempools will continue to exist. But once the P2P layer is enabled on bundlers, bundlers will slowly start to act as builders themselves.

Nicholas: Oh, interesting. Okay. Can you tell me more about how that will work and more about the builder role?

Partha Ramanujam: Yeah. So, the situation I was describing earlier: because of the P2P layer, say I am a searcher. I could spin up my own bundler, connect to peers, and the Etherspot bundler would then propagate user operations to me. I monitor that, and I could front-run a user op in the original L1 transaction mempool as soon as I receive it. In order to avoid this, bundlers have to function as builders. Bundlers would start bundling the user operations from their local mempool and automatically talk to a builder API. So on Ethereum mainnet, every bundler would talk to Flashbots' API, or could talk to Merkle's API, and relay their transactions directly to the builder for inclusion. We are looking at the latency from the propagation of these user ops to the processing of the bundles submitted by the original bundler, so that the bundler would be able to overcome any possible MEV front-running that might happen. We are testing this out on Sepolia right now, and we will eventually roll out on the L1 chain and run tests for a few weeks. Once we are satisfied, we'll do a complete rollout for everyone.
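
The "bundlers become builders" idea described here amounts to relaying the EntryPoint.handleOps call through a private endpoint instead of the public mempool. Below is a rough sketch of that pattern, assuming ethers v6, the v0.6 EntryPoint, and the Flashbots Protect RPC on mainnet; it is illustrative only, not Etherspot's actual implementation, and BUNDLER_KEY and the userOps array are placeholders.

```typescript
// Relay a bundle privately: sign EntryPoint.handleOps() and send it through
// a private relay endpoint so the transaction never sits in the public mempool.
import { JsonRpcProvider, Wallet, Interface } from "ethers";

const ENTRY_POINT = "0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789"; // v0.6 EntryPoint
const PROTECT_RPC = "https://rpc.flashbots.net";                  // Flashbots Protect

const entryPointAbi = [
  "function handleOps((address sender,uint256 nonce,bytes initCode,bytes callData,uint256 callGasLimit,uint256 verificationGasLimit,uint256 preVerificationGas,uint256 maxFeePerGas,uint256 maxPriorityFeePerGas,bytes paymasterAndData,bytes signature)[] ops, address beneficiary)",
];

async function relayBundlePrivately(userOps: unknown[], bundlerKey: string) {
  // The provider points at the private relay, so the signed transaction is
  // not visible to searchers before it is included in a block.
  const provider = new JsonRpcProvider(PROTECT_RPC);
  const bundlerSigner = new Wallet(bundlerKey, provider);

  const data = new Interface(entryPointAbi).encodeFunctionData("handleOps", [
    userOps,
    bundlerSigner.address, // beneficiary: the bundler collects the fees
  ]);

  const tx = await bundlerSigner.sendTransaction({ to: ENTRY_POINT, data });
  return tx.wait(); // resolves once the bundle lands on chain
}
```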

Nicholas: Wow, very cool. So there is some convergence, or at least an expansion of the role of bundlers toward builders, affecting the blocks of the chain they're writing to more comprehensively.

Partha Ramanujam: That's correct, yeah.

Nicholas: So does this mean, I guess, where do you think this goes? A year or two from now, a bundler will be combined with what else? A block builder? Anything else? I guess paymasters often come together with it?

Partha Ramanujam: The paymasters are separate. They are more of a gateway, more like a gas station, right? They provide the ability to pay gas on users' behalf. But bundlers would eventually end up being builders. Based on our research right now, we have managed to find a block building API and get bundlers to behave as builders on Ethereum, Polygon, and BSC. For L2s as well, we are devising a strategy. Given that most L2s right now have centralized sequencers, Arbitrum has built an RPC endpoint for conditional transactions. That is essential in order to avoid on-chain reverts once the P2P mempool goes live. And we are in touch with other L2s, like Optimism, Base, and others, to introduce the same RPC endpoint on their sequencers so that we can launch the P2P layer on these chains as well.

Nicholas: Sorry, so that's to remove duplicates? To avoid publishing duplicates? Yes, correct. But you said that that's implemented in the sequencer itself?

Partha Ramanujam: The endpoint is implemented in the sequencer. So the sequencer will automatically remove any duplicates before they get added to the block. If this endpoint is not available, then there is a possibility of an on-chain revert, because the sequencer will not be able to tell that two transactions are the same, and will include them on chain, where one could fail.
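
For reference, the conditional-transaction endpoint Partha mentions works roughly like this: the sender attaches preconditions so the sequencer drops a would-be duplicate instead of letting it revert on chain. The sketch below assumes Arbitrum's eth_sendRawTransactionConditional method; the exact option fields may differ per chain and should be checked against that chain's documentation.

```typescript
// Submit a signed transaction with preconditions; if the conditions no longer
// hold (for example, a duplicate bundle already landed), the sequencer rejects
// it instead of including a reverting transaction on chain.
const SEQUENCER_RPC = "https://arb1.arbitrum.io/rpc"; // Arbitrum sequencer endpoint

async function sendConditional(signedRawTx: string, options: {
  // Only include the tx if these accounts still have the expected state
  // (a storage root hash, or specific slot values). Field shapes are assumptions.
  knownAccounts?: Record<string, string | Record<string, string>>;
  blockNumberMin?: string;
  blockNumberMax?: string;
  timestampMin?: number;
  timestampMax?: number;
}) {
  const res = await fetch(SEQUENCER_RPC, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendRawTransactionConditional",
      params: [signedRawTx, options],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message); // conditions not met: rejected, no on-chain revert
  return result; // transaction hash if accepted by the sequencer
}
```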

Nicholas: Oh, and this is important because the bundler needs the user operations to succeed in order to get paid?

Partha Ramanujam: That's correct. If there is failure, the bundlers do not get paid.

Nicholas: Right. So the sequencers are going to add this additional functionality so that these kinds of failures don't happen. And that's going to enable you to do more than just dedupe the P2P mempool; it's also going to allow you something in terms of block building or MEV, you were saying?

Partha Ramanujam: Block building and MEV on L2s will have to wait, because there are no MEV protection strategies available on L2s yet, though there are various options being discussed. In fact, I think last week Project Shutter announced working with Espresso Sequencer to bring about an encrypted mempool on Gnosis Chain and other L2s. That would be an interesting thing to work with. Once these strategies come about, MEV protection on L2s will be useful, and then bundlers can probably work with such tools to function as builders.

Nicholas: Got it. Have you been paying attention to the 7579 minimal modular smart accounts?

Partha Ramanujam: That's right. Etherspot is also building its own modular accounts. It's being audited right now. We've been in touch with Rhinestone, the authors of that spec. And we will be releasing our own modular wallet quite soon.

Nicholas: Can you tell me anything about how you think people are going to actually use smart accounts? I mean, there are starting to be different options. Coinbase just launched something. There's a large handful of apps that are already out there. And I know you've also got an SDK. Is it TransactionKit that makes this easier to do?

Partha Ramanujam: Right. So Etherspot has several offerings. The SDK is the Prime SDK. And TransactionKit is essentially a React kit, a React library which lets you build account abstraction applications directly using React tags. It comes with a headless UI, but you are free to add your own components over it, and we abstract away even the complexity of interacting with the SDK by just introducing tags. Back to your original question: Alchemy has its own proposal, I think 6900, which is, again, another standard for writing modular accounts. And 7579 is another, simpler modular account framework. In terms of modularity, it's gaining popularity. There are several implementations of modular accounts on Polygon and other L2s right now, and I'm pretty sure modularity will become the go-to strategy for smart accounts in the near future. I think it's already the case, with ZeroDev also releasing its own version of the modular framework and several other plugins being introduced. I think the Rhinestone guys are talking about a module a week and how to build your own modules. I think that's going to be the way forward, and that's the reason why Etherspot has also decided to release its own version of modular accounts.
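
As an illustration of the "React tags" idea, a batch of transactions can be declared as JSX, with the library handling the account abstraction plumbing underneath. The component and package names below follow Etherspot's TransactionKit documentation as best understood, but treat them as assumptions and verify against the docs before relying on them.

```tsx
// Illustrative only: component and package names are assumed from the
// TransactionKit docs; check the library's current API before use.
import {
  EtherspotTransactionKit,
  EtherspotBatches,
  EtherspotBatch,
  EtherspotTransaction,
} from "@etherspot/transaction-kit";

// A batch containing one simple transfer, declared as JSX "tags" instead of
// imperative SDK calls.
export function SendTip({ provider }: { provider: any }) {
  return (
    <EtherspotTransactionKit provider={provider}>
      <EtherspotBatches>
        <EtherspotBatch chainId={137}>
          {/* placeholder recipient and amount */}
          <EtherspotTransaction
            to="0x0000000000000000000000000000000000000001"
            value="0.01"
          />
        </EtherspotBatch>
      </EtherspotBatches>
    </EtherspotTransactionKit>
  );
}
```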

Nicholas: Yeah, I had the Rhinestone team on previously, and I was sort of wondering whether you thought it was premature to do modular or not, given that smart accounts still are not quite over the hump of adoption. But it sounds like yes is the answer.

Partha Ramanujam: I agree with you in terms of adoption of smart accounts as such; you cannot really compare it yet with regular accounts. But I think that's the way to go. So much so that when Pillar started off with our wallet, we had close to 300,000 users back in 2019, and when we introduced our own smart accounts, we butchered our own user base. It came down to, say, 3,000. So we have the scars to show. I mean, I'm not just going to say smart accounts are the way forward. We've lived through the scars.

Nicholas: Was that just because of the timing of launching the original wallet, or something about smart accounts that made them less popular?

Partha Ramanujam: It's kind of the timing. As I said, as soon as we launched, gas prices spiked, and then there were various other factors, like new chains and other things. So I'm not drinking the Kool-Aid and saying that everything is going to be smart accounts; I'm more realistic. But I think in terms of design, for sure, smart accounts, and modular smart accounts, are the way forward. Since we launched, we have had a lot of customers, and we do see adoption of smart accounts. I'm sure if your solution is modular, it will help developers achieve a lot more than they could if it weren't modular.

Nicholas: How does EtherSpot position itself within this ecosystem? Because you mentioned TransactionKit is like a React library. But as far as I understand, you don't offer your own framework. Do you have a 4337 account implementation, or are you launching one? I guess my question is which parts of the infrastructure you are interested in being involved in.

Partha Ramanujam: Okay. EtherSpot offers the whole stack, right? We offer our own smart accounts framework. What we have right now is the non-modular one, which is already live. So much so that Fuse chain uses our tech stack completely: our accounts, our bundlers, our paymasters. They have it completely integrated within the Fuse SDK. If today you were to build on or use Fuse, you get an EtherSpot smart account, and all your transactions are processed by the Skandha bundler. And if you were to use any app on the Fuse network, like Voltage Finance or Bitaza Wallet, you are using EtherSpot's smart account. So we offer all of them: the smart accounts, the bundlers, the paymasters. And through our TransactionKit, we also make it easier for developers to build any app with inbuilt smart account framework support.

Nicholas: Oh, great. So, full solution. And I guess, who is the ideal customer for you? An app dev or a wallet dev?

Partha Ramanujam: It could be both: an app, a wallet. If you are interested in just using our bundler services, for sure, you can do that. If you are keen on paymasters, you could do that. And we also offer account abstraction services to any new chain that comes up. We are already the default bundler on Fuse. As I said, we provide one of the account abstraction services on Mantle and Scroll. We are deploying our infrastructure; we have infrastructure on Rootstock right now, one of the first BTC-based EVM chains. And there are several other chains that we are part of, and dApps that use us on those chains.

Nicholas: That's -- okay. So, it's a full service option. And how do you price the service?

Partha Ramanujam: Right. So we don't charge anything in terms of gas or anything; all our services are based on usage of our infrastructure. It's like, what do you say, an account abstraction infrastructure service. We offer free accounts, and then you have different pricing plans with appropriate rate limits associated with each plan. We've got a developer plan, we've got a startup plan. And our users are free to pick and choose whatever they want. For example, they could use ZeroDev's smart accounts but our bundlers and paymasters, or they could use our smart account contracts with anybody else's bundlers and paymasters, say Pimlico's. Our tech stack is completely composable, and you can pick and choose whatever you want to work with.

Nicholas: And if people were to do that, is there a coordination layer? Like, I know Pimlico has this permissionless.js. But does EtherSpot offer something? Or is there something else that's emerging as a way to do that?

Partha Ramanujam: We have the EtherSpot Prime SDK. And our TransactionKit, again, is composable. So you could configure TransactionKit to work with, say, StackUp's bundler or Pimlico's bundler, or use a different paymaster. Everything is possible.

Nicholas: I've heard a little bit about this user op compression technology that I believe Daimo was working on. Are you thinking about that at all?

Partha Ramanujam: So there have been a lot of changes and a lot of debates around this, and we haven't really started work on that, because there are two schools of thought here. One says that after 4844 goes live, any such compression built into smart contracts might not offer the same amount of benefit, because data costs already come down after the Dencun upgrade. So we chose to defer the decision on compression, to see how much benefit 4844 provides to smart accounts on L2s before we go ahead and implement that.

Nicholas: I see. Got it. Are there any other interesting developments in the 4337 infrastructure landscape that are happening that I might not be aware of, that are on your mind, or things you're working on?

Partha Ramanujam: Not at the moment. The main focus right now is to launch the mempool, and once that's live, we might start improving on that. Nothing much else. Modular accounts, you are already aware of. I can't think of anything else at the moment.

Nicholas: Does 4844 change anything for you? Yeah.

Partha Ramanujam: 4844 definitely would make it easier and cheaper for smart accounts, at least on L2s. Hopefully, that will increase the adoption. Let's see how things go. Right.

Nicholas: I talked to the founder of Conduit the other day, which is a rollup-as-a-service provider, and in their offering you can do data availability on Celestia. I suppose for the experience of operating in the VM it doesn't really matter where the data availability is, but I was curious if you had any thoughts about alternative data availability solutions as they relate to L2s and 4337, versus the traditional kind of L1 DA.

Partha Ramanujam: Honestly, I'm not sure. I haven't done much -- I mean, I haven't read much on a data availability layer in Celestia in particular. So I wouldn't want to comment on something that I'm not really clear on.

Nicholas: All right. Sounds good. The other thing that came out of that conversation was talk about L3s that are going live, particularly on top of Arbitrum Orbit and also Base now, and I'm sure many other chains, too. Do smart accounts make sense on L3s? I imagine so, right?

Partha Ramanujam: Yeah, I mean, smart accounts would make sense. But I was really interested and surprised to hear that: an Arbitrum layer on top of an Optimism layer, right? Who would have thought of that?

Nicholas: Which one is that? The Orbit?

Partha Ramanujam: I think they published it yesterday, saying that they are working on an L3 that runs on an Arbitrum stack on top of Base, which is an Optimism stack. So you basically have an Arbitrum stack running, which commits to an Optimism stack, which in turn commits to Ethereum, which is really interesting, right?

Nicholas: Wow. I hadn't seen that yet. That's an interesting concept. But in any case, you likely want to run Etherspot on as many chains as possible, yes?

Partha Ramanujam: Yeah. And we are looking to make it simpler, so that anybody who wishes to use Conduit can have this account abstraction offered out of the box. That's something we are looking to integrate directly within Conduit's offering, but we don't know. We'll see how things go.

Nicholas: One other topic that comes up often when thinking about the lived experience of AA accounts in the future is cross-chain state propagation, for example changing the signer on a smart account. What's the state-of-the-art thinking on how we're going to solve that problem? Because most people are not going to remember what their settings are on different chains, and certainly not in a modular world, which is even worse.

Partha Ramanujam: Yeah. Vitalik wrote a blog a while back, I think a couple of weeks ago, and I remember seeing somebody on Twitter saying that they had implemented a POC based on that, which was pretty interesting. They were leveraging a Verkle-based tree specific to an account, and recovery, or changing the key on the account, gets automatically propagated to the smart accounts associated with that key on different chains. I didn't get around to understanding how the entire thing works, but the idea was pretty cool, so I have to look back at that post and delve into it. And I'm told the Soul Wallet guys have something similar with respect to recovery or key replacement; they do have a working POC, I believe. But again, it's been a while; I've not looked at it in detail.

Partha Ramanujam: So if you are building anything using smart accounts, you are free to give EtherSpot a try. We offer the entire tech stack, and it's highly composable. We do not enforce any rules such that if you use our accounts, you can only use our bundlers, for that matter. You can pick and choose any part of the stack and continue to work. So whether you're building a DEX, an NFT marketplace, or any other interesting dApp or game, we have games running on Bifrost and a few other chains that use EtherSpot's infrastructure in some way or the other. So yeah, give it a try.

Nicholas: We covered a lot, but is there anything that we did not yet discuss that you think might be interesting to app developers or smart contract writers or builders of any stripe who are in the EVM ecosystem? Anything that we didn't cover?

Partha Ramanujam: No, not really. I think we pretty much covered everything that we had.

Nicholas: It's so simple, really. Four, three, three, seven. All right. Partha, thank you so much for coming through and talking to the Web3 Galaxy Brain audience all about EtherSpot. I'm very excited for the P2P bundler, and I think by the time this episode drops, that'll be in the wild, so people will have something to go check out right away.

Partha Ramanujam: Thanks, Nicholas. Thanks for having me. It was a pleasure talking to you as well.

Nicholas: Hey, thanks for listening to this episode of Web3 Galaxy Brain. To keep up with everything Web3, follow me on Twitter @Nicholas with four leading Ns. You can find links to the topics discussed on today's episode in the show notes. Podcast feed links are available at Web3GalaxyBrain.com. Web3 Galaxy Brain airs live most Friday afternoons at 5:00 PM Eastern Time, 22:00 UTC, on Twitter Spaces. I look forward to seeing you there.
