Web3 Galaxy Brain 🌌🧠


DC Posch and Nalin Bhardwaj, Founders of Daimo

2 November 2023


Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week, I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guests today are DC Posch and Nalin Bhardwaj, co-founders of Daimo. Daimo is a stablecoin-focused iOS wallet built with passkeys and AA smart accounts. On this episode, DC, Nalin and I discuss their new P256 Verifier contract, which is an audited Solidity implementation of P256 (secp256r1) signature verification. We discuss the ins and outs of gas-optimized on-chain P256 verification, compare their contract with the FreshCryptoLib implementation, and consider the limitations of precomputation. We cover EIP-7212, which DC and Nalin co-authored alongside the team from Clave, and discuss Daimo's exciting proposal for progressive precompiles, also known as precompile shadowing, which would allow precompiles to elegantly replace the P256 Verifier on chains where it is adopted. It was fantastic learning from DC and Nalin, who are experts working at the intersection of WebAuthn, cryptography, and blockchains. I hope you enjoy the show. As always, this show is provided as entertainment and does not constitute legal, financial, or tax advice, or any form of endorsement or suggestion. Crypto has risks, and you alone are responsible for doing your research and making your own decisions. If you value Web3 Galaxy Brain and would like to support the show, please send me a tweet or DM saying why you listen and what makes Web3 Galaxy Brain special for you. I'll post the best testimonials to the show's website. Thank you. Hey, DC, how's it going?

DC Posch: Excellent. You've got me and Nalin here.

Nicholas: Oh, on the same line. Hey, Nalin. How's it going? Yeah.

Nalin Bhardwaj: Good. Nice to meet you.

Nicholas: Nice to meet you, too. Well, it's wonderful to get you both on the show. I'm really excited to talk about the new P256 Verifier and everything you're doing at Daimo. It's gonna be great.

DC Posch: Yeah. Looking forward.

Nicholas: And so, do you prefer to go by DC or Daniel?

DC Posch: Yeah, it's Dan Clemens. So people call me that or DC, either way is great.

Nicholas: Great. And how did the two of you meet in the first place?

Nalin Bhardwaj: Oh, yeah, I guess it's kind of a funny story. It was around when I was getting introduced to Ethereum. There was this program that DC and some other folks used to run called ETH University. At the time, I was a student and DC was a mentor, helping people just understand how Ethereum, consensus, ZK-SNARKs, sort of a lot of the underlying cryptography works. So I think, yeah, that's where we first met.

Nicholas: Awesome. And DC, how did you get involved in crypto in the first place?

DC Posch: Oh, man. I've been interested in it for a really long time, actually. And I've been working as an engineer for a long time at a variety of different startups. The backstory for ETH Uni actually is: two good friends of mine and I run this program called Hack Lodge, where every half a year or so we bring a lot of people together, mostly undergrads. And it's, you know, a big house, a week-long-hackathon type of situation, right? But with less time pressure, more space for talks and going out in nature, doing a hike or something like that. So those have been super fun. And ETH Uni kind of evolved in part out of that, as a more Ethereum-focused, more focused-in-general version. So that's some of the backstory there. And then I guess for me personally, I've been working on many parts of tech, but I've been interested in this idea of things that are permissionless, and the idea of hardness. I think one of the things that's kind of a bummer about technology in general is that it has this centralizing gravity, if that makes sense, right? Like it's...

Nicholas: Absolutely.

DC Posch: Easy to end up, the path of least resistance is to end up with, things that are controlled by a relatively small group of people that live in Sunnyvale or something like that, right? And so something that represents the opposite of that is just really exciting to me on an emotional level. And I think that's how I got into Ethereum.

Nicholas: It is cool to have a kind of technology that is intrinsically interested in, motivated by, and technologically enforcing, you know, permissionlessness, decentralization, etc. Really tangling with these incentives and also the technological implications. Like you say, everything seems to end up so power-agglomerated in technology, because of standardization and Schelling points and power laws and all this. It's interesting to try and have one where the core technology is less centralized.

DC Posch: Yeah, totally. You know, and you still have this Schelling point, but now it's a Schelling point that's based on rough consensus and a community that shares a small but strong set of values. It's really cool.

Nicholas: So we'll get into talking about the P256 Verifier and Daimo and all this stuff in just a second. But I noticed, I guess both of you have a 0xPARC connection as well, right?

Nalin Bhardwaj: Yeah, that's right. Actually, a lot of the people running Hack Lodge and ETH Uni had a lot of overlap with the group that ended up spinning out into 0xPARC. So that's also where we have our connection to 0xPARC.

Nicholas: Oh, that's cool. I think I'm going to be hanging out with some of the 0xPARC people in Istanbul in November. So I'm looking forward to getting to know the community a little more. Hack Lodge sounds cool too, and ETH Uni as well. I hadn't heard of those yet.

DC Posch: Yeah, we're going to be at ZuConnect and Devconnect as well, and really looking forward to that. So we might see you there.

Nicholas: Yeah, that'd be great. So I guess we should lay the groundwork. You two are working together on this project called Daimo. So what is Daimo and how did it get its start?

DC Posch: So Daimo is a stablecoin wallet that runs on Ethereum. And the background here, we kind of came at it from two opposite sides and then converged on like, "Wow, this is a really interesting thing, and the time kind of feels ripe for it." So I think the one side is, you know, you're seeing a lot of organic adoption of stablecoins. And it's a little bit invisible, I think, if you're in the U.S., because our financial system works pretty well in the U.S. I mean, in crypto there's interest in stablecoins, but in the wider public, not so much. But you're seeing a lot of really quick growth in places like Turkey and places like Argentina. And a lot of that, by the way, right now is on Tether, USDT. A lot of that is on Tron. So there's an element of it that's exciting, which is like, "Wow, there's organic product fit." You know, people really love dollars. People really like the fact that it's permissionless and in their control. But it's also scary, because a lot of it is on these systems that are not necessarily that sound. And so I think it's really important for Ethereum to offer a sound and principled alternative there. The other side is just the technology, which is that there have been these improvements that are really exciting. I think the two big ones, the biggest ones that are sort of the unlock, are account abstraction, which is a terrible name, by the way. Account abstraction, contract wallets. They let you do self-custody in a way that is more secure and a lot easier. And they let you pay transaction fees, you know, either have someone else pay them for you, or pay them in the same coin that you're already using. So you don't have to sideload ETH in order to send USDC, for example.
So that's really important for having a quality user experience. The other side is just that rollups are getting really good. People have been using all of these other things because transacting on Ethereum costs five bucks. If we're in a world now where it costs five cents, or maybe after EIP-4844 comes out in a few months, even less than that, it becomes really competitive. And I think you can build a quality product in a way that wouldn't have been possible even a year ago.

Nicholas: You mentioned about AA and smart contract wallets, that part of the advantage is that it's easier to have self-custody. What is it about switching to smart contract wallets that is more ergonomic for the average person than holding on to a private key?

Nalin Bhardwaj: Yeah, that's a great question. It's very interesting. There's this sense in which, you know, cryptography turns a lot of our real-world problems into almost key management problems, right? Even going back further before wallets, with PGP and a lot of the previous cryptosystems, there was this continual struggle with adoption, because the setup of private-public key pairs was not very ergonomic. For example, with PGP, you expected some egalitarian people to just run key servers hosting public keys, you expected the user to copy their private key from their desktop to their mobile, and all of that kind of stuff. And, you know, even with EOAs, or externally owned accounts, on Ethereum and Bitcoin, in the past this sort of copied over, right? We have these seed phrases that users are expected to write on a piece of paper, move them around, all of that kind of stuff. So how does account abstraction in particular play into this? There are sort of two parts to it. One is that with modern cryptography, the one thing we have learned together is that, for security, secrets should be something that are generated randomly by devices and stored on those devices in a way that they are never extractable from those devices. This is actually something that, with a lot of new consumer products like modern phones or MacBooks, we now have: these boxes called secure enclaves, which are essentially separate chips that do key management for you. So that is one piece of it. But the other piece we would want is: if one device owns an account, what happens if you lose this device, or something of that sort?
So account abstraction there provides this idea that you have the ability to add or remove devices, right? So together with these two things, we were able to solve this problem of self-custody in a way that's UX friendly, no more seed phrases or anything, as well as much more secure. You know, this is pretty much as secure, if not more, than Ledgers or YubiKeys and stuff like that.
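The add/remove-device model Nalin describes can be sketched in a few lines. This is a purely illustrative model; all names below are hypothetical, and it is not Daimo's actual contract code, just the shape of the idea: the account is a mutable set of authorized device keys rather than one fixed key pair.

```python
# Illustrative sketch (hypothetical names): a smart account as a set of
# authorized device keys, instead of a single permanent key pair.

class SmartAccount:
    def __init__(self, first_device_key: str):
        # The account starts with one device key, e.g. a phone's
        # Secure Enclave or passkey public key.
        self.authorized = {first_device_key}

    def validate(self, signer_key: str) -> bool:
        """Stand-in for signature validation: is this device authorized?"""
        return signer_key in self.authorized

    def add_device(self, signer_key: str, new_key: str) -> None:
        """Any currently authorized device can enroll a new one."""
        if not self.validate(signer_key):
            raise PermissionError("not an authorized device")
        self.authorized.add(new_key)

    def remove_device(self, signer_key: str, old_key: str) -> None:
        """...and can revoke a lost or stolen device's key."""
        if not self.validate(signer_key):
            raise PermissionError("not an authorized device")
        self.authorized.discard(old_key)

acct = SmartAccount("phone-key")
acct.add_device("phone-key", "laptop-key")     # enroll a second device
acct.remove_device("laptop-key", "phone-key")  # phone lost: revoke its key
print(acct.validate("phone-key"))   # False -- the lost key can no longer sign
print(acct.validate("laptop-key"))  # True
```

The point of the structure is exactly what the conversation describes: losing one device no longer means losing the account, because another enrolled device can rotate the key set.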

Nicholas: So, for you, the UX success and the direction that seems most likely and most promising is the passkey WebAuthn direction. Not so much the affordances that smart contract wallets, and 4337 in particular, allow for something like Magic Link or other kinds of solutions where the keys aren't on-device. You're really focused on on-device private keys that never leave the device.

DC Posch: Exactly. Well, there's two sides there. So passkeys actually do leave the device. Passkeys are backed up, you know, either to the iCloud Keychain or to Google Password Manager. This actually segues really nicely, because I think we're going to talk about the P256 Verifier in a little bit. But yeah, if I had to boil it down: legacy Ethereum accounts are permanently tied to a single cryptographic key pair. You have to write down the private key, or something that's equivalent to it, and store it safely. Really tough UX. And AA wallets in general, abstracted from Daimo specifically, have this really nice flexibility that you can add new keys and you can remove keys. It gives you a much bigger product space for how you want to structure your self-custody, and that lets you make it easier for people. And the thing you're talking about with Magic Link and things like that, that's another part of that space.

Nicholas: So overall, just moving the account into a smart contract wallet allows for more flexibility in what the relationship is with the user. They can use something like a Ledger, or a raw private key where they need to write it on a piece of paper or bang it into metal, or go all the way to a totally centralized solution. AA is enabling that, and that's what you see as the value in AA.

DC Posch: Exactly. And so I can say what we do for Daimo specifically. For Daimo specifically, we're going to have passkeys as a default option, which is, you know, you have a key that signs for your account that gets backed up to Apple or gets backed up to Google. And that has some really nice properties, where if you just install this app and you only have it on one phone, and you lose the phone, your money is not gone. You can go and restore it from that backup, right? And then as an option, and this is very analogous, actually, to something that Apple recently shipped, they call it iCloud Advanced Data Protection, for more serious users who opt into it, you can use enclave keys, which are different from passkeys. Those are hardware-locked and never leave your device. And what's really cool there, and I think this is not widely known or appreciated yet: modern phones, so a recent Pixel or a recent iPhone, have something that is a lot like a Ledger already built into the phone, and no one's using it yet. Apple calls it the Secure Enclave. And it has the ability to generate a key that's guaranteed to never leave the device, never get backed up like a passkey or anything like that. It's a local, purely self-custody thing.

Nicholas: I guess your comparison to Ledger is very apt, because my sense is that with the Secure Enclave, although people tend to give a lot of credence to the security of the thing, it's not really open-source hardware, and we don't have a lot of visibility into how the software works. And yet people in general seem to think that it's a pretty good solution. So it is kind of more like Ledger than Trezor, for example.

DC Posch: Yeah, I think that is fair. One thing I would say is that, in general, Apple's security and privacy is pretty well respected among people who are deep in that space. And I think one concrete advantage that it has over something like Ledger is that Ledger has upgradeable firmware, whereas the Secure Enclave doesn't. And there's a reason for that. It's not that Ledger is doing it wrong. It's just that Ledger is designed to support an open-ended list of crypto protocols. So you can install those little apps on Ledger: you have the Bitcoin app, the Ethereum app, you can install a Zcash app. Whereas the Secure Enclave has a very small list of operations that it supports. You can tell the Secure Enclave: generate me a P256 key and make sure that it never leaves the device. That key is only available through Face ID or PIN and so on, and the Secure Enclave, by the way, is responsible for that as well. It's responsible for user presence. And then you can sign messages with it, and when you ask it to sign a message, it will do the user presence check. So it's this very single-purpose piece of hardware that doesn't have any upgradability built into it. And so the upgradability has to come from the other side. It's not that you're adding support for Ethereum inside the Secure Enclave. It's that you're using account abstraction on the Ethereum side to add support for P256 keys that the Enclave already natively supports.

Nicholas: Right. I've heard a few conflicting things about this. I mean, one of the reasons that Ledger needs to update their firmware is because, as you say, maybe the EVM changes the elliptic curve that it's using or adds a new type of encryption, or they want to support some chain that has a different kind of encryption. So you need to be able to update the firmware for the same Ledger to be able to support the new chain, new fork, etc. But a previous guest, Leo Lunesu, mentioned that there's nothing intrinsic to the hardware of the Secure Enclave that restricts what it can sign. It's only Apple's software limitations. Do you know any details about that? Do they really never update the firmware for the Secure Enclave, and you just need to buy a new phone if you want to sign something new? What are the primitives that we're really stuck with, with a given Secure Enclave? I guess R1, P256 is kind of the way forward.

Nalin Bhardwaj: Right. Yeah. So the way the Secure Enclave works, in particular with iPhones and many other phones, is that it's separated out into its own chip. So any software update on iOS, for example, can never update that chip's firmware, right? There is some bridge between these two chipsets, essentially. The Secure Enclave chip is separate, with its own RAM, own CPU, all of that, from your main memory, your main device, your user-space apps. And this bridge is restricted to just being allowed to sign, verify, encrypt, decrypt, those few operations, on the same P256 set of keys.

DC Posch: In fairness, I do think Nicholas has a point about, you know, the Secure Enclave does run closed-source Apple software. That is true. And so there's an element of trust there, just as there is an element of trust with the actual hardware manufacturing and everything. But I do think that the Apple security team has a really excellent track record there. Of things that you could be trusting, I think it's better than most. And it's not something that will ever touch Apple servers. It's non-custodial in the sense that it's in your possession, the way that, say, a Ledger would be.

Nicholas: I guess what scares me a little bit, and I think for 99% of people this doesn't matter, because they're already, me included, so bought into commodity software and hardware, that, you know, maybe if you're Satoshi, don't store all your Bitcoin in a passkey wallet. But it does make me wonder: is it possible for Apple to change some of the logic that interacts with the Secure Enclave, to have it sign using different types of encryption? I understand what you're saying about the burden needing to be put on the EVM to adapt to what the Secure Enclave does, because it doesn't have firmware upgrades. But I'm curious, because the whole logic of the Ledger upgrades is that they need to be able to update the firmware in order to get you new kinds of encryption. So the Secure Enclaves that are out there in the world are never going to support new types of encryption. Is that what I'm understanding?

DC Posch: Yeah, I think that's right. I mean, maybe in like future releases, like in like future iPhone models or something like that. But yeah, the primitive that it gives you for signing is P256. I believe it also supports 25519, but I'm not sure. But yeah. Okay, yeah. But it's a small fixed set of operations that it supports. And I think that that's actually a good thing.

DC Posch: Yeah. I mean, so I think one point that you made there that's really good: this is generally my least favorite thing about the Apple ecosystem. They do a lot of excellent design and product design and engineering and security engineering, but they do have this very proprietary attitude about everything. It's "trust, don't verify" in some ways, right? So one place where you see that very concretely is that when you publish an app in the App Store, there's actually, by design, no way for an end user to verify that they're running a particular binary. And one place where you can see that, for example: one of the projects that I respect the most in the space, and one of the things that inspired us to do this, is Signal. I think they did some really great cryptography security engineering, and they delivered it packaged in a product that's super easy to use and clean, and not a "for nerds, by nerds" kind of thing, right? And one of the things they did that was really cool is reproducible builds. They did a thing where it's like, hey, anyone can verify for themselves that what you're running is exactly what we published on GitHub, which the security community has reviewed and likes. That only works on Android, though. They did reproducible builds for Android, because on Android you can actually check the hash of an APK that's installed and be like, yep, that's the one. The App Store doesn't actually give you that. Apple reserves the right to package and repackage and modify the actual binaries that they're distributing to the phones as they wish. You submit to the App Store, they do a review process, and then they own distribution.
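The Android-side check DC describes, comparing the hash of an installed binary against a published digest, amounts to something like the following minimal sketch. The helper names and example bytes are made up for illustration; the real workflow hashes the actual APK file and compares against the digest the project publishes.

```python
import hashlib

def sha256_of_bytes(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published_hash(artifact: bytes, published_hex: str) -> bool:
    """A reproducible build checks out when the locally installed binary
    hashes to exactly the digest the project published for that release."""
    return sha256_of_bytes(artifact) == published_hex.lower()

# Toy example: pretend this byte string is the installed APK's contents.
apk = b"example-apk-contents"
published = hashlib.sha256(apk).hexdigest()  # what the project would publish

print(matches_published_hash(apk, published))               # True
print(matches_published_hash(apk + b"tampered", published))  # False
```

This is exactly the verification that the iOS App Store's repackaging makes impossible for end users: there is no stable binary to hash against a published value.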

Nicholas: So at least, although there's broad respect for Apple's security, nevertheless there is some lack of visibility into the details needed to be really sure that everything is exactly as secure as they say it is. Still, it seems like for most people it'll still be the best option. That's basically the consensus amongst people I'm talking to about passkeys. People seem to think, you know, if you have an iPhone, if you have a Mac, and you have most of your stuff in a password manager, be it iCloud Keychain or something else, you're pretty much bought into a similar set of risks as if you have your private key for a hot wallet inside of iCloud Keychain via passkeys or a Secure Enclave key.

DC Posch: Yeah, the passkeys and the secure enclave key are a little different. Like one of those is a cloud backup and one of those is guaranteed local.

Nicholas: But in either case, you're trusting the secure enclave.

Nalin Bhardwaj: So in the case of the iCloud Keychain, the passkeys actually get backed up. So at some points, you know, the private keys are in memory. So the trust model is slightly different than that.

Nicholas: Oh, I see. Okay. But I thought the idea was that they're end-to-end encrypted in iCloud Keychain, so it's not any worse than if the encryption were happening in the Secure Enclave, even though they're stored in the cloud.

DC Posch: I think that might be true if you're opted into the iCloud Advanced Data Protection thing I was talking about earlier. But for 99% of users, that's not the case. And the thing that is actually really nice about it not being the case is, say you have a phone, you lose the phone or it gets stolen, and you get a new one. Basically, if you're able to still use passkeys after losing your old device, without having any written-down backup or anything like that, and getting a new device, then that shows conclusively that Apple had access to it on their servers.

Nicholas: Although there is this Advanced Data Protection thing you mentioned, in which case Apple throws away their keys, which is nicely paired with the recovery contacts feature. So you can add anyone who has an iCloud account who can recover your account to a device that was previously logged in as you, as long as it's an Apple device. So I guess probably best not to do one without the other.

DC Posch: Yeah. I mean, abstracting out of this thing that we were just talking about, though: I think if you look empirically at people losing money in self-custody on Ethereum, the vast majority of it so far is, one, people just losing keys that they had. Like, oh, they wrote down a seed phrase but they lost it, or they didn't write down a seed phrase and then they lost the device that the wallet was on, that kind of thing. And then on the other side, you know, phishing and hacks, and people putting their seed phrases in places where they shouldn't be, things like that. I think the vast majority of it is those two things. And I think that passkeys actually do a pretty excellent job of protecting you from those two majority risks. If you have a passkey-based wallet, there is no seed phrase for you to accidentally put in the wrong place, or for someone else to find and take a picture of. And there's also nothing for you to lose or forget.

Nicholas: Right. I do think it's a great compromise for most people. I think it was Dan Romero from Farcaster last week, when Warpcast dropped their passkeys largeBlob implementation, saying that it's really the best answer for 99.9% of people, based on his experience working at Coinbase for so many years and perceiving all the troubles related to crypto and popular adoption. So I tend to agree with that perspective. So I wanted to ask you about P256, the R1 curve, and verification on-chain. Maybe we've implied it already a little bit, and obviously talked about it on other episodes of this show, but for people who haven't heard about it and don't know exactly what the story is: why do we want P256 verification on-chain?

Nalin Bhardwaj: Yeah, totally. So I guess the TL;DR is that Ethereum and Bitcoin chose this secp256k1 curve for their key pairs. So the normal externally owned accounts on Ethereum use ECDSA, the curve-based signature scheme, with that one particular curve, and it turns out the rest of the world sort of picked a different one. Secure Enclaves, passkeys, YubiKeys, the FIDO Alliance, all of those keys are on this different curve. They're also performing ECDSA, so the same signature scheme, but they chose to pick a different curve. And, you know, we want compatibility between these two. So being able to use YubiKeys or any of these other keys on Ethereum is the goal for having a P256 verifier.
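To make Nalin's point concrete, here is a small sketch comparing the domain parameters of the two curves. The constants come from the SEC 2 and FIPS 186-4 specifications; the variable names are mine. Both curves are short Weierstrass curves y² = x³ + ax + b over a 256-bit prime field, used with the same ECDSA scheme, but the constants are entirely different, so a verifier for one cannot check signatures from the other.

```python
# secp256k1: the curve Bitcoin and Ethereum EOAs use.
k1_p = 2**256 - 2**32 - 977
k1_a, k1_b = 0, 7

# secp256r1 (P-256, "R1"): the curve Secure Enclave, passkeys,
# and most WebAuthn authenticators use.
r1_p = 2**256 - 2**224 + 2**192 + 2**96 - 1
r1_a = r1_p - 3  # a = -3 mod p
r1_b = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

# Same field size, completely different curve equations:
print(k1_p.bit_length(), r1_p.bit_length())  # 256 256
print((k1_a, k1_b) == (r1_a, r1_b))          # False
```

Ethereum's `ecrecover` precompile is hard-wired to the first set of constants, which is why the second set needs either a new precompile (the EIP-7212 route) or an on-chain verifier contract.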

Nicholas: Makes sense. And in this recent blog post that you put up describing the new project that we'll get into in a second, you describe a few different approaches people are taking to getting the passkey, or I guess WebAuthn more generally, signatures on-chain. Maybe do you want to run through what the different options are, and why you ultimately decided that P256 on-chain verification is the way to go?

Nalin Bhardwaj: Sure. Yeah, I guess the goal of the blog post was to describe some of the different ways you can verify P256 signatures on-chain. Over the last few months, there have been a few other teams as well who have been looking into this problem. The one approach is using a smart contract, right, which is the approach we took. Within that, there's some details we can get into later. But of the other approaches, one is using ZK-SNARKs. With SNARKs, you know, we can offload computation off-chain and just provide a proof of the computation on-chain. So that is one approach. We can get into reasons why that is not the most ideal one for this particular use case.

Nicholas: So there's a bunch. There's actually a great summary. Where is it? It's on the doganeth_en Twitter account, an engineer and researcher from Clave, who collected a bunch of different comparisons of all the different techniques people have tried, up until September 21st, when that post was made, at least. You can see the comparison of the gas costs and other things between a variety of the options that you're about to describe. So what is it about the ZK-SNARK-based verifiers that you think is suboptimal, or maybe not appropriate, at least right now?

Nalin Bhardwaj: Right. Yeah, that's a great question. So, as one piece: ECDSA signature verification is something that is relatively simple compared to even the machinery that's necessary to verify ZK-SNARKs. ZK-SNARKs are these complicated objects where you're verifying pairings and sort of very complex math. Compared to that, signature verification is quite a bit cheaper, or quite a bit lighter. The caveat there is that the EVM is a very special place to run computation. On the EVM, things that may be cheap on, say, RISC-V CPUs or just regular hardware don't always end up being the same cost, or even a comparatively similar cost, right? So that was why people originally thought maybe SNARK-based approaches are the right way to do it for the EVM. But it turns out, with the gas trade-off we see now, I know of two SNARK-based implementations which both cost slightly more than what our smart-contract-based verifier costs. So that is one end, on the verifier side. But even backing up one step, on the user side: when somebody submits a signature, the ZK-SNARK-based approach requires that you first generate the ZK proof, or SNARK proof, I should say. To generate the SNARK proof, usually the setup that is necessary is, you have some beefy server, and it takes maybe 10 seconds or something to actually generate that proof. So that creates latency for users before they can even go on-chain with their proof. Whereas with the smart-contract-based approach, as soon as you Face ID or something, your device gives you a signature, and you can go straight on-chain with that.
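For a sense of how simple the underlying verification math is, here is a toy pure-Python ECDSA over P-256. This uses naive affine arithmetic and is not constant-time or production-grade; a gas-optimized verifier like the ones discussed here uses projective coordinates and other algebraic tricks, but the sign/verify equations are the same. Domain parameters are from FIPS 186-4; function names are mine.

```python
import hashlib
import secrets

# P-256 (secp256r1) domain parameters, from FIPS 186-4.
P = 2**256 - 2**224 + 2**192 + 2**96 - 1
A = P - 3
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
G = (0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296,
     0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5)

def add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication (not constant-time)."""
    out = None
    while k:
        if k & 1:
            out = add(out, pt)
        pt = add(pt, pt)
        k >>= 1
    return out

def sign(d, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    while True:
        k = secrets.randbelow(N - 1) + 1     # per-signature nonce
        r = mul(k, G)[0] % N
        if r == 0: continue
        s = pow(k, -1, N) * (z + r * d) % N
        if s: return (r, s)

def verify(Q, msg, sig):
    """The check a P256 verifier performs: R = u1*G + u2*Q, R.x mod n == r."""
    r, s = sig
    if not (0 < r < N and 0 < s < N): return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % N
    u1 = z * pow(s, -1, N) % N
    u2 = r * pow(s, -1, N) % N
    R = add(mul(u1, G), mul(u2, Q))
    return R is not None and R[0] % N == r

d = secrets.randbelow(N - 1) + 1  # private key (a Secure Enclave never reveals this)
Q = mul(d, G)                     # public key
sig = sign(d, b"hello")
print(verify(Q, b"hello", sig))   # True
```

Note that verification is dominated by two scalar multiplications, which is exactly the part that on-chain implementations fight to make cheap in gas, and which a SNARK instead moves into an off-chain proof.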

Nicholas: Right. And if people want to see what that looks like, the SNARK-based approach, or at least the Halo 2 variant of it: noseedphrases.xyz has a demo of passkeys with Halo 2-based signature verification. But basically, you're saying it's slower to sign, or to do this other off-chain computation in order to get the SNARK, and then it's more gas-costly once you're on-chain to verify it, if you go with the ZK approach.

Nalin Bhardwaj: Exactly. That's right.

Nicholas: Okay. And then in the blog post, which is titled "The first audited P256 verifier", from October 9th, 2023, and I'll put the link in the show notes for the podcast version of this, you go then into another approach, the smart contract verifier approach. And you talk a little bit about this Ledger implementation. I'm curious about Renaud Dubois' approach. Well, I'll ask some detailed questions on it once you give a little summary of what's going on with Ledger and this on-chain verification approach.

Nalin Bhardwaj: Totally. I mean, I guess I should just say, his implementation is brilliant, and that's where a lot of our implementation started from as well. They did a brilliant job of writing a verifier that costs, I believe, around 270k gas to verify the average signature. In comparison, we took a lot of the underlying algebraic and mathematical tricks from their implementation, but we ended up swapping out to some different code, or, well, our own code, based on implementations from Supranational's blst and some others. One thing the implementation from Renaud was different on is that it used a lot of assembly and a lot of low-level memory tricks to squeeze out as much performance as possible. Of course, that's great for gas efficiency, but it also makes the job of auditing or reviewing the code much more difficult. And for such an important primitive, it seemed important to us that we, for instance, really deeply understand it ourselves. And funnily enough, we ended up finding some minor issues in Renaud's implementation as well, which we emailed him about during this exercise. So all to say, we took a lot of inspiration from the tricks, but on the actual engineering side we swapped out to something cleaner: no unchecked code, no assembly, no reverts, nothing of that sort.

Nicholas: Yeah, I'm interested in that decision. Because this is something that's going to get called every single time someone executes a user op on an account abstraction wallet, so it's going to be very frequently used. Do you think long term that it'll make sense to stay in pure Solidity and not dip into assembly at all? Or do you think that over time, once this audited code is more frequently used, we'll eventually move to something more like Renaud's path, more in the nuts and bolts of the optimization?

DC Posch: So I think that the gap in gas costs is pretty small, actually. It's like 270k gas versus like 320k. One thing I would say, and I think a lot of people listening to this might already know, but just to make sure: when you're doing transactions on a rollup, your costs are dominated by L1 data costs. So for example, Daimo transactions are costing about 5 or 10 cents now, and less than one cent of that is the actual computation gas on L2. So getting that little part down by another 10% or so, I don't know if it moves the needle that much. Because the amount of data that you're putting through to L1, currently it's L1 call data, very soon it's going to be L1 blob data, is the same. And that's where most of your cost comes from.
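DC's point about L1 data dominating the fee can be made concrete with a back-of-envelope calculation. All dollar figures below are hypothetical stand-ins for the rough split he describes, not Daimo's actual numbers:

```python
# Back-of-envelope rollup fee split: L1 data cost dominates, so shaving
# L2 compute gas barely moves the total. All numbers are hypothetical,
# in dollars, chosen to echo the rough split described in the episode.
l1_data_cost = 0.05     # call data / blob data posted to L1
l2_compute   = 0.008    # execution gas on the L2, incl. P-256 verify

total_now = l1_data_cost + l2_compute
total_opt = l1_data_cost + l2_compute * 0.9   # a 10% cheaper verifier

saving_pct = 100 * (total_now - total_opt) / total_now
assert 0 < saving_pct < 2   # the 10% L2-gas cut moves the total fee by under 2%
```

Under this split, even cutting L2 compute entirely only moves the total fee by a few percent, which is why the optimization effort goes elsewhere.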

Nicholas: Interesting. So if you can shrink the amount of data you need to write to L1, then that's really where the majority of the gas savings are.

DC Posch: Precisely. And the one thing that I would see long term, it's not so much about micro-optimizing the P256 verifier. The big thing is, eventually, if we have really significant scale, it might be worth reconsidering a ZK-based approach. Because what ZK lets you do is signature aggregation, which actually does save you some of the expensive resource, which is that L1 data space.

Nicholas: Very interesting. Right. I am seeing in Doğan's summary table that they claim that the FreshCryptoLib library, FCL, the Renaud implementation, is at 205k gas.

DC Posch: You're going to say 200k gas? Yeah.

Nicholas: Is it not?

DC Posch: It's funny. So here's the thing. As far as we can tell, that number comes from averaging over the Wycheproof test vectors. The Wycheproof vectors are excellent; we're also using those for our test suite. The thing, though, is that because they're test vectors, they have a mix of valid and invalid signatures. And the invalid signatures can often be rejected very fast, in like a couple thousand gas, because it's a fail-fast thing, right? You start going through the signature verification algorithm and something fails right at the beginning. And averaging those in brings your average down. But I don't think that's representative, because when you're transacting on chain, you're always verifying valid signatures to do that. And for that, the FCL implementation is 270k gas.
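The averaging skew DC describes is easy to see in miniature. The gas numbers below are illustrative (only the 270k full-verification figure comes from the episode; the fast-reject cost and vector counts are made up):

```python
# Why averaging gas over a mixed test-vector suite understates real cost:
# invalid signatures fail fast and cheap, while real transactions always
# pay for a full, valid verification. Counts and reject cost are made up.
valid_gas   = [270_000] * 100   # ~ full FCL verification, per the episode
invalid_gas = [3_000] * 150     # hypothetical fast-reject costs

mixed_avg = sum(valid_gas + invalid_gas) / (len(valid_gas) + len(invalid_gas))
real_cost = sum(valid_gas) / len(valid_gas)

assert mixed_avg < real_cost    # the headline average looks much better...
assert real_cost == 270_000     # ...but on-chain use pays the full price
```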

Nicholas: That's a good point. Maybe the real trick is to just never send a transaction that completes. That's the most gas efficient.

DC Posch: Yes, exactly. Reverting transactions are very cheap.

Nicholas: Now we're starting to get into Ordinals inscriptions territory, I'm getting excited. So you mentioned this Wycheproof test suite. I had never heard of that before. Can you explain a little bit, what is that?

DC Posch: So Wycheproof is cool. It's a Google project, actually, Project Wycheproof. And what it is, is a relatively comprehensive set of what are called test vectors for a whole range of cryptography primitives. So they have test vectors for P256 signatures, but they have test vectors for a whole bunch of other things as well. And what's really great about that is, all kinds of projects around the world that are implementing a cryptographic primitive, whether it's on chain or in hardware or in software or anywhere, you have this common set of test cases that everyone uses to ensure that their implementations are correct and that they're all compatible with each other.

Nicholas: That's very cool. There are some other test things that you mentioned, and I mean, it's a very technical post actually, there's a lot going on here. So basically, you've rewritten this FreshCryptoLib in a more legible way. And in the course of writing this blog post, you talk about the Strauss-Shamir trick, the Jacobian representation, the point at infinity. I don't know if we should run through all of these in detail, or maybe is there one that you think is particularly interesting, or some kind of optimization that's worth mentioning here?

Nalin Bhardwaj: Yeah, I mean, the blog goes into a lot of detail about the underlying math. But I think the TLDR is that cryptographic primitives on CPUs have been optimized a lot. So the key idea that, for example, Renaud had, and that we're depending on as well, is using a lot of the tried and tested tricks from CPUs, but in this new environment of the EVM. Some of the cost ratios look different, so sometimes you would pick or do things that might not make sense on actual hardware CPUs, but that do make sense from a gas cost perspective in the EVM environment.
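One of the tricks mentioned, Strauss-Shamir, is worth sketching. ECDSA verification needs u1·G + u2·Q; instead of two separate scalar multiplications plus an addition, both scalars share one double-and-add ladder. Below is a toy pure-Python version over P-256 affine coordinates, purely illustrative (not constant-time, and nothing like the audited Solidity):

```python
# Toy pure-Python P-256 ECDSA verify using the Strauss-Shamir trick:
# u1*G + u2*Q in a single shared double-and-add ladder.

p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a = p - 3
n = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
G = (0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296,
     0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5)

def add(P, Q):
    """Affine point addition; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def shamir(u1, u2, Q):
    """One shared ladder for u1*G + u2*Q (the Strauss-Shamir trick)."""
    GQ, R = add(G, Q), None
    for i in range(max(u1.bit_length(), u2.bit_length()) - 1, -1, -1):
        R = add(R, R)                       # one doubling per bit, shared
        bits = ((u1 >> i) & 1, (u2 >> i) & 1)
        if bits == (1, 1):   R = add(R, GQ) # add the precomputed G+Q
        elif bits == (1, 0): R = add(R, G)
        elif bits == (0, 1): R = add(R, Q)
    return R

def verify(e, r, s, Q):
    """Standard ECDSA verify: accept iff (u1*G + u2*Q).x mod n == r."""
    if not (1 <= r < n and 1 <= s < n):
        return False
    w = pow(s, -1, n)
    R = shamir(e * w % n, r * w % n, Q)
    return R is not None and R[0] % n == r

# Demo: make a signature the long way, then check it verifies.
d, k, e = 12345, 67890, 0xDEADBEEF      # toy private key, nonce, msg hash
Q = shamir(d, 0, G)                     # public key Q = d*G
r = shamir(k, 0, G)[0] % n
s = pow(k, -1, n) * (e + r * d) % n
assert verify(e, r, s, Q) and not verify(e + 1, r, s, Q)
```

The gain is that the doublings, which dominate the cost, are paid once for both scalars instead of twice.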

Nicholas: Yeah, also, I guess maybe the Strauss-Shamir trick is not even one of those that fits into what you just said, but just ordering operations such that they fit the computational expense of different kinds of operations. Maybe that is an EVM optimization also, actually, but writing it specifically for the VM you're running on.

Nalin Bhardwaj: Yeah, precisely.

Nicholas: Yeah, I guess if people are interested in that, they can go to it, because I won't belabor the details too much here. But there are lots of interesting links to deep cryptography concepts that are worth checking out, and great rabbit holes. So overall, we're talking about this second technique of the three that you discuss in the blog post, which is the smart contract verifiers. There was one other thing, and maybe a more significant gas savings, potentially, that is mentioned in passing but not dived into in the blog post, which is pre-computation. Can you explain? Because on previous shows, people who have been listening may have heard guests and me make reference to this 69k or 70k gas Ledger contract, which is a pre-computed version of the Renaud contract we were talking about a moment ago, from what I understand. So can you explain, what is pre-computation and why have you chosen not to do it?

Nalin Bhardwaj: Right. Totally. Yeah. So let's see. So pre-computation. The key idea of pre-computation is that the public key you're going to be verifying against is often going to remain the same, right? So you will have many, many signatures, but if they all come from the same public key, then in the verifier algorithm, actually, there are a lot of pieces that remain basically the same. So the trick you can do is pre-compute a lot of the elliptic curve math that involves the public key in particular, and write it into storage in Solidity. So when you later go and do a signature verification, you don't actually need to run that compute again; you're just reading from storage. So that is, I guess, the TL;DR on what the pre-computation itself is. Maybe the key part to note as an application or wallet dev is that it's specific to the particular key. That means that every time you want to add a new key to your wallet or something of that sort, you would have to run this computation again. And the other part to note is that the storage actually has a pretty massive cost. So even though the actual signature verification is only 70k gas, if I recall correctly, the pre-computation takes storage worth of something like 3 million gas.
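The fixed-key precomputation idea can be sketched with modular exponentiation instead of elliptic-curve points, since the structure is the same: a one-time table build per fixed base (per public key, in the EC case), then cheap queries that only read the table. The modulus and base below are arbitrary toy choices:

```python
# Fixed-base precomputation, sketched in the multiplicative group mod P
# rather than on the curve, for brevity. One-time setup stores g^(2^i)
# for each bit position (the expensive "write to storage" step); each
# later query only multiplies together the table entries for set bits.

P = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
g = 7    # toy fixed base, standing in for the fixed public key

# Expensive one-time step, analogous to the ~3M-gas storage deploy.
table, acc = [], g
for _ in range(256):
    table.append(acc)          # table[i] == g^(2^i) mod P
    acc = acc * acc % P

def fixed_base_pow(e):
    # Cheap query path: no squarings at all, just table reads + multiplies,
    # analogous to the ~70k-gas verify that reads precomputed points.
    r, i = 1, 0
    while e:
        if e & 1:
            r = r * table[i] % P
        e >>= 1
        i += 1
    return r

assert fixed_base_pow(31337) == pow(g, 31337, P)
```

The trade-off Nalin describes falls straight out of this shape: the table is per-base, so every new key pays the setup cost again.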

Nicholas: At least in this table I'm looking at, it's 1 million gas to deploy the version without pre-computation, and let's say 270k gas for the verification. And for the pre-computation version, it's 3.2 times as much, so 3.2 million gas to deploy, and around 70k to verify. So significantly more. It's interesting what you say. I was thinking you'd have to do it once per AA deployment, but you're right, it's per signer. So if you plan on having multiple signers, say for the Android and iOS ecosystems, two signers on the same AA, you're going to have to deploy 3.2 million gas worth of contract each time you add a signer, which is pretty hefty.

Nalin Bhardwaj: And I mean, the other piece to note is that, because this is storage, it's not, I guess, reduced as much by the L2 paradigm. So it's the less desirable kind of gas, the kind you'd most want to avoid.

Nicholas: Because the majority of that gas gets carried through as call data or blob space to L1 even?

Nalin Bhardwaj: I believe a decent chunk does, right? Because you have to store the Merkleization, or I might be wrong about the details. But yeah, there's definitely some amount of overhead that the network will bear.

Nicholas: Very interesting. Okay. So we're skipping pre-computation. Is that like a sort of consensus view amongst people that pre-computation is not the direction or is this very much up for debate in the community?

DC Posch: I mean, I think a lot of it goes back to what we were saying before: the L2 computation gas is actually a tiny part of our overall transaction costs. And so I think our main goal with P256 Verifier is actually not to gas-minimize it. It's just to have a really clean, canonical version that we can have a really high degree of confidence is correct, and can get audited. I mean, if the gas cost is really excessive, at some point it becomes annoying because of throughput issues. But between the current 300k gas or so and the 80k or so that it would be with pre-computation, I don't think it would make our transaction costs much cheaper in the short term. There is one other important reason why we didn't do pre-computation: so that we could have something that is a pure function that acts as a drop-in replacement for the proposed precompile. You actually mentioned Doğan earlier. So, Doğan and Ulaş from Clave, plus Nalin and I, have a proposal out called EIP-7212. That's a precompile for P256 verification. And what we've done with the P256 Verifier contract is give it an equivalent interface, so that it's a drop-in replacement for that.

Nicholas: I think what you're about to get at is so cool. I'm extremely excited about the precompile shadowing concept, or I think you have a different name for it in the blog post, but we'll get to that in just a second. So for people who are interested in the 7212 EIP, there is another episode with Ulaş and a bunch of other people, Jerome, and a lot of different people on that episode. That's great, so we'll get a deep dive into 7212, but this is also the third option presented in the blog post. And just to be clear for anyone listening who's maybe not as familiar with this stuff: earlier we were talking about pre-computation, per-address pre-computation, and now we're talking about precompiles, which are a completely different subject, although the words sound a little bit similar. For the precompile, just to give a little bit of a summary, what this would essentially do is take this R1 or P256 verification, basically letting you check on-chain that a signature from a passkey, a WebAuthn signature on this R1 curve, is valid. But instead of resorting to a smart contract that someone like these two fine guests deploys manually, it would be something built into the EVM itself, and could thus be executed on the bare metal, not within the EVM but in the node software, which could be written in Go or Rust or whatever. So it can be very, very fast, and also the gas price can be set arbitrarily. And in the 7212 EIP, you're proposing 3,450 gas, which is obviously a huge savings over any of these proposed numbers, even the pre-computed per-address contracts. So that would be very interesting, of course.

DC Posch: Yeah. No, we're really excited about that. One thing I can say, and this is a bit of a subtle point, but let me talk very briefly about L2 costs and L2 throughput. In the short to medium term, even going from the 300k gas that our on-chain verifier costs to the bit more than 3,000 gas that the precompile would cost, even that wouldn't actually reduce our transaction costs significantly. Because the majority of our transaction cost is L1 data, which is unaffected, and the small part of our transaction cost is that L2 computation, which is what we would be reducing by 99%, right? So in the short term, it doesn't make it cheaper. So why do we care? Over time, as the popularity of rollups grows, and as more and more activity moves onto them, you end up running into some limits. Fortunately, those limits are going to be higher than they are on L1. The point of rollups in general is to offload computation in a way that's off-chain from the perspective of L1, and then use L1 for data availability and settlement. But you still have limits that are defined by software engineering, right? It's not necessarily a goal on a rollup that a home validator can run a rollup node, so you're not as constrained as you are on L1. But there is going to be some point where it's like, okay, L1, as many here know, processes about a million gas a second. Maybe L2 will do 10 million or 100 million, but it won't do infinity, right? And once you start running into the limits of that, due to increased usage, there could come a point where the price of computation on L2 no longer rounds to free. The precompile pushes that limit much further out, and with it the overall throughput.
I hate to use the word TPS, because TPS is the favorite metric of really silly projects. But if you want to do a global payment system that supports that level of throughput, then the precompile is a really nice medium-to-long-term unlock for that.

Nicholas: So now we can start to get into this discussion of the precompile shadowing, which is, I mean, it's so cool. You've got this post on Ethereum Magicians, talking about it. Can you explain how we could go from the smart contract version to the precompile version trustlessly?

DC Posch: Yeah, totally. So traditionally, the way precompiles work is that they use special low-numbered addresses. So for example, address 0x05 is the modular exponentiation precompile. What was that?

Nalin Bhardwaj: Inverse, I think.

Nicholas: Oh, okay. Sorry.

Nalin Bhardwaj: Yeah, might be wrong.

DC Posch: Well, it's an early precompile that has like a specific function. And the way that works is, you know, you're calling into that address as if there was a contract there, but it's just a like locally implemented thing that is part of the consensus protocol.

Nicholas: So basically, it's part of the node software. There's no actual contract at that address. It's just that all the node softwares agree that they'll interpret it in a specific way, and execute a specific computation if that contract is called.

DC Posch: Exactly. So a tiny bit more context there: people don't want to just keep adding opcodes to the EVM. So instead of doing that, for some of these higher-level things, like compute this hash function, compute this signature verification, where Ethereum wants to add, or wanted many years ago to add, special support, but it's way more than you would normally do in a single opcode, those get a precompile. It's just a special low-numbered address that is a stubbed contract, in effect. So the idea of progressive precompiles is as follows. We have this cool new facility as of the last couple of years called create2, which lets you create a contract at a deterministic address, where, to oversimplify slightly, the address is a hash of the code. It's: here's code that implements this specific pure function, and it will always exist on every EVM chain at an address that's just a pure function of that code. So we're using create2 for deploying the P256 verifier. A lot of other folks are using it for deploying pure functions to the chain, and most account abstraction wallets are create2 addresses, because it lets you keep this really nice property that legacy addresses used to have, where they automatically work on all chains, or at least you have the possibility to. Yeah, that's create2.

Nicholas: So just one little thing before you continue, for people who aren't as familiar with create2: basically, what it lets you do is determine at what address a contract will be deployed without deploying it. So in advance, you can know the address to which you'll be deploying, which is how you get that property of knowing what address a contract will be deployed at on various chains. So it's a very cool property that's used all over the place. Counterfactual deployment is super useful. It lets you do things with AA, for example, like generating a passkey and knowing what smart contract wallet address it will correspond to, without having to deploy it. So a user can onboard and do things from an address, and until they need to propagate something on chain, they actually don't need to have their contract wallet deployed yet. So this is a cool property that's used in what you're about to explain.
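The create2 rule being described is: address = keccak256(0xff ‖ deployer ‖ salt ‖ keccak256(init_code))[12:]. A sketch of that structure is below; note Python's standard library has no Keccak-256, so SHA-256 stands in, meaning the digits differ from real addresses, but the property being illustrated (the address is a pure function of deployer, salt, and code) is the same:

```python
import hashlib

def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
    # Shape of the CREATE2 rule:
    #   keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:]
    # SHA-256 stands in for Keccak-256 here (stdlib limitation), so the
    # output bytes are NOT real Ethereum addresses.
    h = lambda data: hashlib.sha256(data).digest()
    return h(b"\xff" + deployer + salt + h(init_code))[-20:]

deployer = bytes(20)            # hypothetical deployer address
salt = bytes(32)
code = b"p256 verifier runtime bytecode"   # placeholder for real init code

# Same inputs give the same address on every chain; changing even one
# byte of the code gives a completely different address.
assert create2_address(deployer, salt, code) == create2_address(deployer, salt, code)
assert create2_address(deployer, salt, code) != create2_address(deployer, salt, code + b"\x00")
```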

DC Posch: Yeah, exactly. No, that's exactly right. So basically, this idea of progressive precompiles is putting two and two together on precompiles and create2 contracts. It's: hey, we want to implement a new pure function. What if we do it as a create2 contract first? Then it exists on all chains, everyone can use it, it works, it just costs a lot of gas. And we also propose it as a precompile at that same address. Instead of at 0x19, or 20, or 21, or whatever the next low-numbered address is, we put it over the top of that create2 address. And so what happens is, as different EVM chains adopt that precompile, maybe at different times, everything that's using that function just continues working. All of the other contracts calling into it just become more gas efficient. So it has this nice, smooth deployment property. Overall, if I were to zoom out, the goal of progressive precompiles is to create a smoother and more realistic way of implementing new precompiles, now that we have this world where there's a whole bunch of EVM chains out there.

Nicholas: I think this is super, and I'm very excited about R1. And I think this is maybe even more interesting than R1 overall, just in terms of EVM nerdery. Because as you explain in the blog post, and as many people are familiar, a big reason these optimistic rollups have become popular is that they achieved EVM equivalence faster than pretty much anybody else; their EVM equivalence is maybe even more legitimate than their fraud proofs and the rest of their L2 infrastructure a lot of the time, currently at least. So there is a disincentive for them, and for any new chains, to deviate from EVM equivalence. Because, who knows: PUSH0 came out recently, and it exists on some EVM chains and not on others, and this is causing basically a fracturing of code bases. If you're a Solidity developer who wants the exact same behavior on different chains, you now need to think about whether the chains have different precompiles, for example, or other affordances that differ. So there's a bit of a conservative pressure on chains that have just achieved some kind of product-market fit with their EVM equivalence not to go and deviate, even to enable broader adoption of Ethereum through WebAuthn-based passkey signing on AA. There's a reason for them not to adopt 7212 in advance of L1, even though the original pitch for 7212, as I understand it, is that it's really designed for L2s to pick it up first. So what this progressive precompiles strategy allows is for chains to switch very elegantly and in a really trustless way.
One detail that I didn't understand until you just explained it now is that you would intend to put the precompile at the create2 address of the existing contract, rather than having some kind of on-chain upgrade function that would be deterministic and trustless. Callers don't even need to change the address they're calling out to. That's very clever as well.

DC Posch: Yeah. And I mean, the cool thing is that, in the same way an EOA address is a commitment to a specific public/private keypair, a create2 address is a commitment to a specific contract. If you change even one byte of the bytecode, you get a completely different create2 address. And so if you have a contract that is well understood by the community, is audited, is known to implement a particular pure function correctly, then the create2 address is a really nice representation of that function. It's callable at high gas cost everywhere, and then if it ends up shipping as a precompile, it's callable at low gas cost on certain chains, and eventually everywhere.

Nicholas: So it does depend on the precompile, like for example, 7212 being exactly the same contract as the smart contract deployed previously being used by AA wallets, for example. So you really can't change anything.

DC Posch: Yes, precisely. And one thing that we've done to facilitate that: we're co-authors on 7212 now, and we've edited it a little bit to match the NIST spec exactly. So 7212, as it's currently up there on the EIPs page, matches the NIST P256 spec exactly. And then we did the same with P256 Verifier. Those Wycheproof vectors that we talked about earlier are a good representation of the P256 spec. So with the contract being tested against those vectors, and future per-client implementations of the precompile being tested against those same vectors, I think we can get to a high degree of confidence that they're all implementing exactly the same thing. And the P256 spec is the thing that's really widely out there, so it's also an exact match for what's on YubiKey, Secure Enclave, and so on.
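From the caller's side, the shared interface DC describes is just a static call with a fixed byte layout. A sketch of the input encoding, as I read the EIP-7212 draft (five 32-byte words: message hash, r, s, qx, qy; confirm against the spec before relying on it):

```python
# Caller-side sketch of the EIP-7212 / P256 Verifier input encoding:
# hash || r || s || qx || qy, each a 32-byte big-endian word, 160 bytes
# total. The same bytes work whether the target is the CREATE2 contract
# or a native precompile at that address -- that is the whole point of
# the drop-in-replacement design.

def encode_p256_verify_input(msg_hash: int, r: int, s: int, x: int, y: int) -> bytes:
    words = (msg_hash, r, s, x, y)
    return b"".join(w.to_bytes(32, "big") for w in words)

calldata = encode_p256_verify_input(1, 2, 3, 4, 5)
assert len(calldata) == 160
```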

Nicholas: So, we've talked about it, and I think you've very astutely pointed out several times that saving gas on L2, while interesting, is not really the most important thing for reducing the cost to the end user, or to whoever is paying the gas, because of the L1 cost of the call data or blob space. However, one thing that is sort of implicit in this is that if the 7212 spec, and in the meantime the P256 verifier smart contract that you're deploying, can't change, then we can't later on implement some of the optimizations from Renaud's version, for example. So we would be stuck with a pure Solidity implementation if we were to do this kind of progressive precompile strategy over the contract that you've deployed this week.

DC Posch: That's correct. Yes. I mean, it certainly doesn't prevent other contracts from using, say, a P256 verifier that does do precomputation; they just won't get auto-opted into the precompile once the precompile lands. For this kind of progressive precompile idea to be possible, the function has to be a single global pure function, if that makes sense. A function that takes certain inputs and always returns the same output, not a per-account thing or anything like that.

Nicholas: Right, right. So it really is like a library. I mean, not in the Solidity sense, but it is just a pure function on chain. So there's no storage, no nothing, no extra machinery; really just the simplest version of this thing. And I guess the fact that the Solidity implementation couldn't change between the smart-contract-deployed version and the precompile doesn't really matter, because once you get to the precompile, you're executing on bare metal anyway, and it's up to the nodes to decide how to implement it, rather than follow some Solidity recipe, right?

DC Posch: Precisely, yes.

Nicholas: So it's not so bad. So it may make a lot of sense, then, to have the most secure, readable, auditable version today, and then, instead of trying to squeeze gas savings out of precomputations and so on, we just jump to doing it on the bare metal. That makes a lot of sense. Actually, that's very clean. I'm very impressed by that.

DC Posch: There was a lot of hard thinking about it. Yeah.

Nicholas: Awesome. One of the things that you stress in the blog post and the announcement of the P256 verifier is the audit by, is it Veridise? Maybe you could explain a little bit about what it was like working with them, and why you think it's important to have really audited code available for this.

Nalin Bhardwaj: Yeah, totally. I mean, there have definitely been a number of implementations of different smart contract verifiers, as well as SNARK-based ones like the one you mentioned earlier. But the piece all of these have been lacking is an audit, right? And one reason auditing is really important for a primitive like this is that you go and use it in account abstraction wallets, and it's the decider, the owner of the custody, right? So if a false signature passes through, it means somebody else can transact on a user's behalf, and that would be really bad. So for all account abstraction wallets in particular, getting this to be an audited and widely accepted security primitive is quite important.

DC Posch: I would say also, in general, I think that getting a security audit is table stakes for any kind of mainnet contract that is dealing with real money.

Nicholas: Yeah, absolutely. Especially something as important as this one. It's not just a little NFT contract; this is potentially going to be a very popular contract to interact with on a variety of chains. I had one question from the audience. EVM Brahman wanted to know, I'm just going to read this out: whether the Daimo team have, or are working on, a public-key-specific precomputed verifier contract, similar to the one in the FreshCryptoLib implementation, which increases the initial account creation cost. I guess we already went through this; it's basically the precomputation thing. Actually, EVM Brahman is a contributor to FreshCryptoLib, so maybe a little bit of inside knowledge on the advantages and disadvantages of precomputation. But I guess we already really addressed this question.

DC Posch: Nice. So I would say, not the precomputation, but one really great thing that I think we are going to do, and this is not a replacement for, but an addition to, the P256 verifier: you have to do a little bit of additional wrapper processing in order to deal with passkey and WebAuthn signatures. The reason for that is that the way passkeys work, they have this base64 encoding step on the inputs before they get signed. And so you have to have the ability to base64 encode on chain, and handle all that envelope verification correctly. FreshCryptoLib has something like this, a wrapper around their P256 verifier, and we're working on our version of that. I think it's going to be coming out relatively soon.
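The envelope DC describes can be sketched off-chain: a passkey does not sign the raw challenge. The authenticator signs authenticatorData ‖ sha256(clientDataJSON), where clientDataJSON embeds the challenge base64url-encoded, which is why an on-chain wrapper needs base64 handling before calling the raw P-256 verifier. Field values below (origin, lengths) are illustrative, not Daimo's actual wrapper:

```python
import base64
import hashlib
import json

def webauthn_signed_payload(authenticator_data: bytes, challenge: bytes) -> bytes:
    # Reconstruct the bytes a WebAuthn authenticator actually signs:
    # authenticatorData || sha256(clientDataJSON). An on-chain verifier
    # wrapper has to rebuild exactly this before the raw P-256 check.
    client_data = json.dumps({
        "type": "webauthn.get",
        # challenge appears base64url-encoded, unpadded, inside the JSON
        "challenge": base64.urlsafe_b64encode(challenge).rstrip(b"=").decode(),
        "origin": "https://example.com",   # hypothetical relying party
    }, separators=(",", ":")).encode()
    return authenticator_data + hashlib.sha256(client_data).digest()

# A real authenticatorData blob is 37+ bytes (rpIdHash, flags, counter);
# zeros stand in here.
payload = webauthn_signed_payload(b"\x00" * 37, b"\x11" * 32)
assert len(payload) == 37 + 32    # authData plus a 32-byte clientData hash
```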

Nicholas: Great. The, what was I going to ask you about this? Was there something else about P256? I think that's actually all my questions on the P256 verifier. Is there anything that I forgot to ask you that you'd like to mention before we wrap that subject?

Nalin Bhardwaj: No, I mean, DC and I are building Daimo, and we have our Twitters and stuff, and we'll be at ZuConnect and Devconnect with, I guess, more on Daimo as well.

Nicholas: Yeah. Well, before we go, I want to ask you a couple of questions about Daimo. So Daimo is really a stablecoin-centric, passkey-based wallet living on Base. Is that a rough, good technical description of the product?

DC Posch: That's true. One thing I would say is, right now we are shipping using USDC and on Base because, you know, a thousand no's for every yes; we have to be really focused. I want to make something that's a really smooth product, with minimal surface area. What we're going to be working on soon is adding support for cross-chain sending and receiving. In general, we want Daimo to be a stablecoin wallet. It's not going to be something that is only USDC and only Base, if that makes sense.

Nicholas: Okay, got it. But it is targeting, probably, a user living in a place where there's massive inflation, who wants an easy way to get their hands on quality stablecoins. That's kind of the target audience for the product, more or less?

DC Posch: Exactly. Yeah, that's a really important audience. And I also think, you know, people in the Ethereum world, people who are traveling internationally a fair amount. Like, I know a lot of the researchers working on Ethereum, and a good number of them are nomadic and going from place to place. One interesting thing I would say about the fiat payment app world, and fiat in general, is that it's very balkanized. So you've got Venmo that works in the US, you have apps that work in the UK and Europe, you have WeChat in China, but there's not really one thing that works everywhere. And I think there's sort of an additional group of people for whom it's really useful to have that.

Nicholas: Yeah, definitely. Anyone who's tried to send fiat cash across distant borders has encountered how painful, expensive, and slow that can be. So I definitely think there's room for a product like that, especially for people who maybe don't have the attention span or interest to learn that much about crypto, but want some of the actual use cases it's particularly good at. It makes a lot of sense to me. So is Daimo in TestFlight currently, or is it in the App Store? What's the status of the product right now?

DC Posch: TestFlight now, App Store very soon.

Nicholas: Very cool. Very cool. So if people want to try Daimo, if they want to give some feedback, what's the best place for them to find you at?

DC Posch: So we have a sign-up and a Telegram channel. We'll send the link right after this, actually.

Nicholas: Great. I'll put it in the show notes. DC and Nalin, this was fantastic. It was great getting to talk to you about the P256 Verifier hot off the presses. It just came out a few days ago, right? So we're very with it on this show.

Nalin Bhardwaj: Yeah. It's exciting. Thank you for having us. It was a great conversation.

DC Posch: Yeah. This was awesome.

Nicholas: Thank you. Absolutely. Thank you so much for taking the time. And thanks, everybody, for coming to listen. I'm excited to try out Daimo on my phone soon. Thanks, DC. Thanks, Nalin. Thanks to all the listeners. And I'll see you next week for another episode of Web3 Galaxy Brain. See y'all. Hey, thanks for listening to this episode of Web3 Galaxy Brain. To keep up with everything Web3, follow me on Twitter @Nicholas with four leading N's. You can find links to the topics discussed on today's episode in the show notes. Podcast feed links are available at Web3GalaxyBrain.com. Web3 Galaxy Brain airs live most Friday afternoons at 5 p.m. Eastern Time, 2200 UTC, on Twitter Spaces. I look forward to seeing you there.

Related episodes

EIP-7212 with Ulaş Erdoğan, Jerome de Tychey, and Lionello Lunesu

6 September 2023
Obvious Smart Wallets with Himanshu Retarekar & Jebu Ittiachen

20 September 2023
Jose Aguinaga on Passkeys, MPC, and AA Wallets

22 September 2023