Web3 Galaxy Brain đŸŒŒđŸ§ 

Paul Razvan Berg of Sablier

17 August 2023

Transcript

Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guest today is Paul Razvan Berg. Paul is a long time Solidity developer whose open source tools, including PRB Math and PRB Proxy, are integrated into many Ethereum contracts and protocols. Paul is also the co-founder and lead developer of Sablier, the token streaming protocol which launched its v2 in July 2023. On this episode, Paul and I go in-depth on three topics. First, we discuss a handful of his influential open source repos. Second, we talk about testing in Foundry and recap the Branching Tree Technique test writing framework that Paul presented at EthCC this year. Finally, we cover Sablier, its origins and what's new in the recently launched v2. In the course of discussing Sablier, we also touch on PRB Proxy, which is a new proxy implementation that Paul believes is a great update to Maker's DSProxy. This was an excellent conversation and it was a pleasure learning from Paul, who's not only a brilliant and expert Solidity dev, but also a generous soul. I hope you enjoy the show. As always, this show is provided for entertainment and education purposes only and does not constitute financial advice or any form of endorsement or suggestion. Crypto is risky and you alone are responsible for doing your research and making your own decisions. Hey Paul, welcome.

Paul Razvan Berg: Hi Nicholas. How's it going everyone?

Nicholas: Yeah, we got Julian here.

Paul Razvan Berg: Hi Julian. I'm doing well. And you, how are you man?

Nicholas: Good, good. I've been like going through all the stuff you've been working on lately and you've been putting out a lot of very interesting code. So I'm excited. There's like a bunch of things. I want to talk to you about PRB Proxy. I mean, even I want to go back to PRB Math, which is where I first heard of you. Testing in Foundry, you recently did a talk at EthCC. And talk about Sablier v2. So super excited for this conversation.

Paul Razvan Berg: Likewise. Yeah. And thanks for digging into my PRB cult of open source. Somebody coined this term. It wasn't me. But anyway, yeah. And also, of course, very pumped about Sablier v2. We worked on it for one year and something, 15 months. So glad to have everything launched and just be able to show it now.

Nicholas: Yeah. Congratulations. It's been about a month since it came out officially. Yes. Pretty cool. Yeah. I heard you mentioned 15 months working on it and the docs and everything is very clean. It feels like very... Thank you. There's something about everything that I reviewed when looking through your recent work, especially just there's so much of it, but it feels like you have achieved a great level of mastery over Solidity. And it comes through in your Twitter and also in your code, but especially in how the projects are organized now. It's just obvious that these are coming from people who've done it before. It's not the first time you've tried doing these things. So a lot to learn from the code that you're writing. Actually, that was my first question for you. How did you get so good at Solidity?

Paul Razvan Berg: Well, I mean, thanks for the kind words. Like the way I explain it is just the OCD. I mean, my team can validate this that I often end up debating all kinds of small things. But my position on this is that look, the end product is a combination of multiple small things. And yeah, so to answer the question, it's not just one thing. I've been developing... I've been writing software for 10 years, Solidity for five years.

Nicholas: What language did you start in?

Paul Razvan Berg: I started developing iPhone apps. So that was Swift and Objective-C, I think, like 10 years ago. But now I have no experience anymore. Like if you showed me some iOS code, it would look like a new language. But back then, that's what I started with. Anyway, and yeah, so it was just like a... I basically have an obsession with high security and never losing money for our users. And with that goal in mind, you just end up doing all the necessary things to get to that point of like, okay, now I'm ready to go because I did all the humanly possible things. And yeah, just a bunch of different small things, basically.

Nicholas: So basically, practice makes perfect.

Paul Razvan Berg: Exactly. Yeah. I'm a big believer in that. I mean, I also appreciate theory. I think having the right kind of philosophy helps a lot. And here I want to mention, you know, Popper and Deutsch, big influences on me. But it's also a combination of applying that in practice, which takes toil and effort and debating over minutiae on GitHub, which individually seem like small things. But when you look at the end results, each one of those plays a role.

Nicholas: And one thing that I noticed about when I see you speaking or when I see you publishing open source code is, unlike a lot of very expert Solidity devs, you're very... There's many people who share code or share knowledge that they've gained, but often there is a tinge of elitism about knowing it and other people not knowing it, which seems so crazy when the language is so new, as you mentioned in your test talk at EthCC recently. And I feel like what comes through in a lot of what you put out is friendliness towards developers who maybe don't have the experience to, you know, or don't have the solution to a problem. And you've had to encounter that problem previously. So you're able to just share it in a way that's more welcoming, I find, than the average expert Solidity coder. So I want to thank you for that.

Paul Razvan Berg: No, thank you. Yeah, I do want to give a shout out to Popper again here, which has been my main philosophical influence. That attitude flows from my core fundamental belief, which is that... Let me just quote this amazing...

Nicholas: Yeah, please. This is Karl Popper, right?

Paul Razvan Berg: Karl Popper, yeah. So I've got a tweet about this, but the idea is... His quote is, while we differ widely in the various little bits we know, in our infinite ignorance we are all equal. And this is a core tenet of mine. I actually believe that there's an infinite amount of stuff that I have no idea about. And being an elitist about what I know, it's just one small bit. Like yeah, let me share it so others can learn from it. And let me share it because maybe somebody will find errors in it. And there's a bit of selfish interest there. But even besides that, just in pure, absolute terms, I can just never forget this mindset of like, there's so much we don't know. Like everybody, each one of us, there's so much we don't know. And I just can't get that off my head.

Nicholas: Yeah, I think there's something about engineering culture, software engineering culture maybe in particular, where your value is based on how much you know. And so there can be some... And also your value as a person, as a mind, as a thinker, and as an employee, as a contributor are all financially and socially weighted by how much you can do. And so there can be all this gatekeeping comes out of the incentive schemes. But it's so irrelevant because you can be a huge jerk about how much you know. But ultimately, yeah, as you say, you don't know about most things. It's just not possible. You have this cost of optionality, of choosing what you want to study. So even if you're an expert in something, you don't know about something really very close and next door, people who are experts, as you say, about iOS, who can remember what they were working on 10 years ago precisely. You mentioned also David Deutsch. How does he inspire you?

Paul Razvan Berg: Oh, I've read his books, The Beginning of Infinity and The Fabric of Reality. It's a nice worldview. There are other people better equipped to present their worldviews, but I'm just a follower in the trenches. And I think they offer a very fresh and modern take on philosophy, physics, and it's very consistent. It applies to basically every domain. And it's a nice, you know, guideline to keep in mind when you're, you know, building, especially when creating new knowledge, or if you think of code as new knowledge. And I find it helpful to have a bit of understanding of, you know, epistemology and philosophy when you're creating new knowledge.

Nicholas: I'll have to read The Fabric of Reality. I haven't read that yet. Sounds good. So when you first got involved in crypto, did you go straight to Ethereum, straight to Solidity, or did you dally around and try other things?

Paul Razvan Berg: It was Ethereum. I loved it from the get go. People usually dive into Bitcoin or whatever. But for me, it was, you know, I joined the space in 2018. And I remember vividly when I was sitting in this random Starbucks in my hometown in Romania. And I deployed my first contract with Truffle. I think it was a token or something. And I said, look, like, I'm a random guy. It was, I don't know, 2019 or something. And I have this power of, like, deploying a financial contract from this cafe. I was just immediately hooked. I bought my first Ether. So I first bought Ether and then Bitcoin. Quite proud of that. And yeah, I just love Ethereum. I love the programmability aspect of smart contracts, which you don't have with Bitcoin. And I was hooked.

Nicholas: You know, it's a little bit off topic, but I've been thinking a lot lately about the two different styles of programming, like how in Bitcoin you can add meaning to the chain by doing like off chain indexing of things like ordinals and inscriptions and other things in the past also, versus like doing the computation inside of every node, like executing the code on every node in Ethereum. Do you have any thoughts on that? That's just been on my mind lately. I'm curious if you have thoughts on what the model is for execution of code in blockchains and if Ethereum is just so dominant that nothing else matters. Or it seems to me we're heading towards a world where with ZK and L2s, a lot of the computation is actually being done off chain and then just verified by some stack that lives off chain. Maybe there's some connections into the chain, but you're not actually doing the compute inside of EVM, especially on L1. Do you have any thoughts about any of that stuff?

Paul Razvan Berg: I'm a proponent and believer in the modular blockchain approach to this problem, which involves separating things. You basically have multiple roles that a blockchain plays and you give different blockchains different roles. And the Ethereum blockchain as it is known today, it does everything. It's like, imagine a big factory, like a car factory doing all the components in the same factory. The modular blockchain approach is, we will do the chassis with this contractor, we will do the windows with this other company. And it's the same idea here with the modular approach. And specifically I have in mind platforms like Celestia and Fuel, I think they're doing great work. But I mean, also just, I mean, Ethereum itself, if you look at, as you said, the Layer 2s, Arbitrum, Optimism, they're also in a way doing this, although I think they're doing it in a way where they're not necessarily conscious that they're doing this. But generally the point is, you use a blockchain as a data availability and consensus layer, which means that you basically order blocks and you put data into blocks. But you don't necessarily look at what state is in there. So you delegate the execution part to some other rollup or some other blockchain which only does that. And you basically separate the execution of contracts from the data. And this is the core Celestia thing. And by the way, I have no stake in this. I just think they're doing great, like great work. And the way it works, you know, on Celestia, you post everything, you post all the data, but nobody in the Celestia blockchain itself, nobody actually verifies that that data leads to logical state transitions. You have to take the data onto a rollup and then that rollup will process the transactions for you.

Nicholas: I was listening to old podcast interviews from like 2020 with John Adler. Did you know him? Yeah, he's great. Yeah. So I'm just trying to get up to speed with all the L2 stuff. And this is exactly what you're saying. Because I think Fuel and Celestia, it was lazy blockchain or LazyLedger or something before.

Paul Razvan Berg: LazyLedger, yes.

Nicholas: Yeah. So exactly. And it reminds me, when I came into the scene, I learned a tiny bit about Arweave, SmartWeave, which I think is essentially the same. Like anybody can post data to this data chain, but it doesn't imply that the state transitions that they suggest are valid. You have to then do the validation of the state transitions of this data off chain and decide whether or not it actually affects the state, which is also very similar to both Ethscriptions, and to inscriptions and Ordinals on Bitcoin, where anybody can post data, but the protocol is like a standard convention that then has to be implemented by an indexer that runs off chain and decides whether or not the state transitions that are proposed on chain are actually valid. So I don't know. There's something about this that I'm not sure. Are you creating a new problem of a new validation, validator set problem around the indexers? Or are you essentially creating a side chain by doing that, where you're going to need some new consensus or trust in individual indexers or not? But it does seem like there is something going on here that's more complicated than just like, oh, we're going to spin up straight-up one-to-one EVMs as L2s, like maybe L2s that are more for data storage or specific things that you might want to call into, even from other chains where you just want some specific functionality that's available on another chain that's really specialized for it. It seems like there's something going on in the tension over where the execution happens.

Paul Razvan Berg: That's right. Like the word for this is the notion of sovereign roll-ups, where you basically have control over the base layer and that changes the game. Because you can use the model for things like app chains, but you can also use it for full blown smart contract platforms like Optimism and Arbitrum, whatever. So my guess is we will definitely see increasingly more species of roll-ups that are not necessarily full blown EVM general purpose computers, but at the same time, they're not, you know, targeted app chains for, I don't know, dYdX or whatever. I think there's an entire new design space in between, which will be powered by this modular blockchain ecosystem.

Nicholas: Do you think things like the Superchain are required for that? Or I guess my real question is... Right now the way people think about it is you have to move to the other chain in order to gain the advantages of its affordances, like let's say cheap transactions at scale. You either put something in a vault on L1 and have it appear on L2 and then manipulate it there, or you have native L2 assets, be they fungible, non-fungible, or other contract state. But you move the locus of execution to the L2. Do you foresee it being possible to call out to other L2s within the space of a transaction on a chain that is more general purpose, in order to get functionality from purpose-built chains?

Paul Razvan Berg: I'm not really sure. I'm not a researcher for cross-chain bridging systems. But I think for simple transfers, for simple apps, that in principle seems doable. I see no major roadblock for that. But for, I don't know, generalized cross-chain smart contract interactions, that seems a bit more tough. So I think what we will definitely need is a bunch of new bridging solutions that are as trustless as possible. That's hard, of course. But I know there are many smart people working on this. So I'm hopeful.

Nicholas: It's been great quizzing you on stuff that's not your expertise. But that's not really the main thing. When I was working at Juicebox, I encountered in the code for the first time, PRBMath, which was very useful. Can you talk about that? I feel like that's popular. I'm curious what the story is of PRBMath and also if any other of the little libraries that you've written to help yourself have become sort of popular, to your surprise or amusement.

Paul Razvan Berg: Yeah, thanks. PRBMath is this two-year-long project of mine, which I built in 2021. It's a math library, a Solidity library for advanced fixed-point math. So it gives you everything from basic multiplication and division with fractions and percentages to more advanced functions like logarithms, exponentials, and power functions. And the reason why I built it, I was working at HiFi at the time, this fixed-rate lending protocol. Now they've pivoted to NFT projects. But I was looking at it like we needed a math library that had power functions, fixed-point calculations and so on. And I looked at what was available and basically no library matched my quality standards. In general, when people go look, they typically install a dependency and they go, oh yeah, look, it has 50 stars. It's fine. It should be fine. But what I do is I literally go look in the source code and I see how they have tested it, how well it is documented, and just looking at the source code to see, does this look like a proper library that I can put my faith in? And I looked at all the libraries available back then and nobody matched all of my standards. Maybe there was one exception, there was ABDKMath, which was built by this auditor, a very, very smart guy, Mikhail Vladimirov. However, the user experience of using ABDK, and I think others can confirm this, but from my experience, it uses binary fixed-point numbers, the documentation wasn't that great, like not the comments and all that, and I couldn't see any tests. So long story short, I looked at that, I said, so this is like a fun project to take on, something useful to have, not just for me, but for the entire ecosystem, because math, especially arithmetic and fixed-point math, is like a primitive you need basically in most advanced, complex DeFi projects. So I said, let's learn. In a way I used PRBMath to learn the latest features of Solidity, it was like my testing ground initially, then it evolved into a proper project and now it's audited. And now I look on npm and it has almost 2,000 downloads on a weekly basis. So I think it's one of the most popular math libraries in DeFi at the moment. But to go back to where we were, yeah, so it was a combination of things: I wanted a library which was at the same time intuitive, efficient and safe, and had good tests, was clear to follow, very well documented. If you go look in the code, I basically use English to explain every single mathematical step. And yeah, I think I spent two or three months building it, including in my free time, during the weekends and so on. And then it came out, people liked it and I continued to develop it. And now with the latest version, version 4, I think I built something super cool because as far as I know, PRBMath is the leading project for a very cool recent feature in Solidity called user-defined value types. And I call it a library, but in fact, it's not like a library in the strictest sense now. It's a collection of user-defined value types, which are abstractions on top of the raw uints and ints that the library uses. And the benefit, the big benefit is type safety. So as far as I know, with all other libraries except PRBMath, you don't have type safety. So for example, if you accidentally multiply a USDC amount by a DAI amount, which have different decimals like 6 and 18, in some cases, that will be fine. But in other cases, it can fail in a very bad way, in a subtle way.
With PRBMath, that can't happen, because you will see that the 18-decimal type, it's called UD60x18. So the compiler will not even let you do that computation until you make the conversion.
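
To make the type-safety point concrete, here is a minimal sketch of how user-defined value types catch a mismatched-decimals mistake at compile time. The type and function names below are made up for illustration; PRBMath's actual type is UD60x18 and its API differs.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.8;

// Hypothetical wrapper types for illustration only.
type USDCAmount is uint256; // 6-decimal amount
type DaiAmount is uint256; // 18-decimal amount

// Converting between the two requires an explicit, deliberate step.
function usdcToDai(USDCAmount a) pure returns (DaiAmount) {
    return DaiAmount.wrap(USDCAmount.unwrap(a) * 1e12);
}

function addDai(DaiAmount a, DaiAmount b) pure returns (DaiAmount) {
    return DaiAmount.wrap(DaiAmount.unwrap(a) + DaiAmount.unwrap(b));
}

contract TypeSafetyExample {
    function demo(USDCAmount usdc, DaiAmount dai) external pure returns (DaiAmount) {
        // addDai(usdc, dai) would not compile: the types do not match.
        return addDai(usdcToDai(usdc), dai);
    }
}
```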

Nicholas: That's great. So the fact that it's just a bunch of types that you're importing rather than a library, I guess that really makes it cheaper to deploy. Also, you're not bringing in as much code, you're only bringing in the types that you need for your project, I guess.

Paul Razvan Berg: Yes, that's one of the advantages. It's the fact that with user defined data types, the only bytecode that is imported is the bytecode that you're consuming. Although that was true with libraries as well. With types, it's mostly a question of type safety and also user experience. It looks nice to see those math types over there.

Nicholas: That's cool. Are you in touch with the Solidity folks? I feel like they're not really so present on Twitter that much. But are you because you're so deep in it? Are you close to the people who are actually building Solidity?

Paul Razvan Berg: Kind of close. Yeah, you know, I've been posting along the way. I found various problems in Solidity or things that were missing and I reported them as feature requests and whatever. But the Solidity crew, they tend to chat mainly in the Matrix server they have. Also on their forum and on GitHub, but not so much on Twitter. I agree they should be more vocal about themselves and about what they've built because I think version 0.8 is actually great. But yeah, I agree with that.

Nicholas: Yeah, it's interesting that they're not. I mean, you talk about shadowy super coders. They're really like, I don't know, they're in Berlin primarily, or I guess maybe they're all over the world.

Paul Razvan Berg: Yes. Mainly Europe. But yeah, Berlin and Poland, I think.

Nicholas: Just imagine, it's so funny, the difference between, like, you know, there's so many app-layer projects that present themselves as protocols, but then the real infrastructure teams, and especially Solidity, even more so than like all core devs or things like that... You don't really encounter them, and yet their influence... If they were a VC-backed team, which maybe they are on some level, but geez, imagine the kind of noise they would be making. They're like the most popular new language for blockchains. And yet most people don't even know their names. I was reading Cryptopians and read that there was, I can't remember the person's name, but someone under Gavin who did the majority of the implementation of the original Solidity. I don't know how much of that stuff is still relevant, but it's interesting with things like, lately, the Vyper

Paul Razvan Berg: compiler

Nicholas: bug or whatever the bug was in Vyper recently, and language diversity and stuff. We don't actually talk about the people who are writing the languages that much. We don't know them so well.

Paul Razvan Berg: Yeah. Who was the person? I don't know about them. You said the name.

Nicholas: I'm going to look it up while you're answering some future questions. I'll try and find you the name. But it was in Laura Shin's book Cryptopians. Oh, OK. Which is, you know, I can't vouch for it, but I like Laura Shin. And in the story, they talk about how there was someone working underneath Gavin who I think wrote, did a lot of the work. Interesting. Actually defining Solidity. So I'll check. But that's another thing about Solidity. I wish I understood the history of it a little more. You said that the 0.8 version is particularly good. Is there something in particular that makes you say that? Or what is it?

Paul Razvan Berg: It's a bunch of things. You know, there's the user-defined... the user-defined operators, which were added in some version of 0.8. Twelve or something, anyway.

Nicholas: But people aren't even using them so much yet, or at least I don't see. I'm using them.

Paul Razvan Berg: Yeah, well, you're the leader. I'm like a hardcore early adopter in general. Like I love testing out new things and so on and so forth, across domains, by the way. I'm full-time into longevity. I'm like experimenting with all kinds of drugs. But anyway, to come back to the point, Solidity version 0.8. I mean, firstly, it's less error prone because they have checked arithmetic while still enabling you to access unchecked arithmetic with unchecked. Right. I love that. I think that should have been the default from the get go. So, you know, there's that. There's the user-defined things we talked about. All kinds of gimmicks, which individually seem like small things, but combined they add up to a much better user experience. For example, you have abi.encodeCall. So you can have typed calldata generation. You also have string.concat. At Sablier, we worked on this NFT descriptor that is, you know, inspired by the Uniswap V3 positions one. And if you compare our implementations, ours is like, you know, 50% cleaner just because of that string.concat thing. Previously you had to do abi.encodePacked and then cast to a string.

Nicholas: Yeah, yeah, yeah.

Paul Razvan Berg: Yeah, it was super messy, like bytes in there somewhere. And with string.concat, it's just string.concat, string 1 and string 2. Super beautiful and nice.

Nicholas: Can you just explain it very quickly, the abi.encodeCall thing you just mentioned? I haven't used that yet.

Paul Razvan Berg: Yeah, it's pretty cool. So typically, when you want to, you know, encode a low level call, you use stuff like abi.encodeWithSelector, or abi.encodeWithSignature if you want to go even lower level. But with abi.encodeCall, you can pass the interface and the function name as a pointer. And if you don't provide the values correctly, as in they don't match the order of the function's parameters, then Solidity will complain. And that's quite neat for low level calls and all of that jazz.
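
As a rough illustration of the two features mentioned here, assuming a standard ERC-20 style interface (the contract itself is hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.12;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
}

contract EncodeExample {
    // abi.encodeCall type-checks the arguments against IERC20.transfer;
    // with abi.encodeWithSelector, a wrong argument type would still compile.
    function buildTransferCalldata(address to, uint256 amount) external pure returns (bytes memory) {
        return abi.encodeCall(IERC20.transfer, (to, amount));
    }

    // string.concat replaces the old abi.encodePacked-then-cast pattern.
    function describe(string memory name) external pure returns (string memory) {
        return string.concat("Token: ", name);
    }
}
```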

Nicholas: That is cool.

Paul Razvan Berg: One other thing. Oh, how can I miss this? It's custom errors. I'm a big custom error maximalist. They reduce the gas cost by 90%, 80%. And I also think they look more beautiful, although I think this is subjective.

Nicholas: I don't know. They look beautiful in the error logs, maybe. But the inversion of the logic of require is unfortunate. With the if for a custom error, it's the opposite logic of a require statement, which is maybe not the best.

Paul Razvan Berg: No, I hear you. But I think this is, I mean, typically in software engineering, that's how you do it. The custom error way is, right, you make an if and then you revert. So I think the require thing was like a lazy approach that, over a long time, people got used to.
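
A small hypothetical example of the two styles being discussed, with a made-up error name:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.4;

// A custom error reverts with a 4-byte selector plus ABI-encoded arguments,
// instead of storing a full revert string in the bytecode.
error CallerNotOwner(address caller);

contract CustomErrorExample {
    address public owner = msg.sender;

    // Old style: require with a revert string.
    function guardWithRequire() external view {
        require(msg.sender == owner, "CALLER_NOT_OWNER");
    }

    // Custom-error style: the condition is inverted into an if, then a revert.
    function guardWithCustomError() external view {
        if (msg.sender != owner) {
            revert CallerNotOwner(msg.sender);
        }
    }
}
```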

Nicholas: But I like this about Solidity, that it's legible for someone who's familiar with JavaScript, even though it's typed, even though it has all these other things going on. I like that it's an accessible language. I think that's really, really cool. It's not, yeah, it's not a minor thing. Of course, people who are pros can be annoyed by aspects of that. But yeah, it seems like, I don't know, maybe even things like via IR fix that, or just dropping down into assembly to get your Seaport-level optimization done or something. But I love that, you know, when people realize, when the light bulb goes off, when they realize what an NFT contract actually is, it's a joke.

Paul Razvan Berg: You're saying, you know, as a matter of, like, fun fact, we have a junior developer at Sablier and Solidity was his first programming language and he loves it. I mean, it's great. Like he was able to pick it up quite quickly and do awesome stuff. And speaking of via IR, that's another cool feature in version 0.8 where you have this super powerful optimization just by enabling one flag in your settings. Although it's a bit of a double edged sword, because when you enable via IR you lose, like, coverage in tests. What we did was, we now have this complicated setup where during testing we use the simple way, with no via IR, but for the deployment we use via IR. The trade off is worth it if you know what you're doing and you have the time to spend on it. But otherwise, I would actually not recommend using via IR unless you really have to because of the contract size or something.
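
One possible way to wire up the split Paul describes is with profiles in foundry.toml. This is a generic sketch, not Sablier's actual configuration; the key names follow Foundry's documented settings.

```toml
# foundry.toml
[profile.default]
optimizer = true
optimizer_runs = 200
# No via IR here, so coverage and fast test runs keep working.

[profile.optimized]
via_ir = true
optimizer_runs = 10000
# Used only for production builds, e.g. FOUNDRY_PROFILE=optimized forge build
```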

Nicholas: You said you lose gas and coverage testing in Foundry?

Paul Razvan Berg: Yeah, you completely lose test coverage, not gas, although I'm not actually fully sure but test coverage, absolutely. You lose that in Foundry.

Nicholas: I noticed that it makes the errors much harder to read, because they're errors corresponding to code that is not the code you wrote.

Paul Razvan Berg: Yeah. Yeah.

Nicholas: Plus, really bad. And I had used it in the same context you're suggesting or implying, the stack too deep error, in like a tokenURI generating some on-chain SVG or other XML. It's tempting to use via IR because it lets you just get so many more variables on the stack. But yeah, it ends up making testing much more painful. And also there's challenges with verification on Etherscan, sometimes it doesn't work. I even just heard the other day that there are security audits finding problems with contracts that are using the optimizer, and that the optimizer is introducing vulnerabilities into contracts, and you shouldn't just assume that it's safe to use.

Paul Razvan Berg: Interesting.

Nicholas: Yeah, I don't know if you saw that. I'll try and find the tweet.

Paul Razvan Berg: No, I haven't. But this is interesting.

Nicholas: Yeah, worth checking.

Paul Razvan Berg: I mean, also, my problem with the optimizer is that the optimizer runs should not be called runs. It's a misnomer, because what that setting actually controls is more like how many times you expect the contract to be called. Although that's a difficult concept to communicate in a single word, I guess.

Nicholas: Actually, I forget how it works, but it basically increases the deploy cost and decreases the transaction cost in practice. But I don't know how it actually achieves that mechanistically.

Paul Razvan Berg: Well, it's what I said before. Like, it looks ahead, assuming that a contract will be called n times. And those are the runs. So if your contract is expected to be called a whole bunch of times, a million, a bajillion times, then yeah, a high optimizer runs setting makes sense. It will increase the bytecode size, but it makes sense because it lowers the cost of every subsequent call.

Nicholas: Does it, like, reorganize the functions in the bytecode, or does it change the names or something, so that they're cheaper to call?

Paul Razvan Berg: I think it operates on the bytecode, but I'm not fully sure. Like I haven't dived that deep, but I know the mechanistic explanation of what happens if you turn it on and off and tweak it and so on. And yeah, my gripe with that is that the optimizer runs should be renamed to something else. I don't know what, but not runs.

Nicholas: Right, right. Something anticipated popularity or something like that.

Paul Razvan Berg: Yeah, yeah.

Nicholas: Something a little bit mouthful. OK, so that's awesome. So PRB Math, we covered it. I think it's in Uniswap v3 even, I think. Is that right?

Paul Razvan Berg: It was in the documentation website, not in Uniswap v3 itself. Hopefully in v4, in some hook-based AMM, somebody will use it.

Nicholas: No doubt.

Paul Razvan Berg: But no, it's used in a bunch of projects. I think you said Juicebox.

Nicholas: Yeah, we used it for multiplication and division, if I recall. That's how I used it in whatever little contract I was writing. Oh, we forgot to mention about Solidity 0.8: SafeMath, right? You don't need to import SafeMath. It's all safe.

Paul Razvan Berg: That's the checked arithmetic I talked about in the beginning. But yes, yeah, yeah, exactly. You don't need SafeMath. I tweeted about this at one point. If you're still using SafeMath in version 0.8, you're doing it wrong.
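
A tiny sketch of what that means in practice (the contract is hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

contract CheckedArithmeticExample {
    // Since 0.8.0, overflow and underflow revert by default, so SafeMath
    // wrappers are redundant.
    function subtract(uint256 a, uint256 b) external pure returns (uint256) {
        return a - b; // reverts with a panic if b > a
    }

    // unchecked opts back into wrapping arithmetic where overflow is
    // provably impossible, e.g. a bounded loop counter, to save gas.
    function sum(uint256[] calldata values) external pure returns (uint256 total) {
        for (uint256 i = 0; i < values.length; ) {
            total += values[i];
            unchecked {
                ++i;
            }
        }
    }
}
```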

Nicholas: Yeah. I also saw... Yeah, yeah. Oh, sorry. Go ahead.

Paul Razvan Berg: No, no, I just wanted to briefly mention PRB Math. We're also using it in Sablier V2. And if you go to the... There's like a dependents page on GitHub. There's a bunch of DAOs and protocols using it in various capacities. Anyway, sorry, I think you were saying something.

Nicholas: Yeah. So, OK. So we talked a little bit about PRB Math. I wanted to have the first conversation about these kind of big ecosystem contributions that you've made. The second part of the conversation we're going to get to in a minute is testing in Foundry and then PRB Proxy and finally Sablier, because I think that's going to be the biggest part. But we're running out of time. So you also put out a shell script recently for turning NatSpec into Docusaurus, which is so dope. I can't help but mention it.

Paul Razvan Berg: Thank you. I didn't expect people to like that that much. It's basically like an automated way to turn your NatSpec comments into a Docusaurus markdown file system.
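
For anyone unfamiliar, NatSpec comments are the triple-slash tags written above functions; the script turns tags like these into markdown reference pages. The interface below is made up purely to show the tags.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.0;

/// @title A hypothetical streaming interface, shown only to illustrate NatSpec tags.
interface IStreamExample {
    /// @notice Creates a stream that vests `amount` of `token` to `recipient`.
    /// @dev Reverts if `recipient` is the zero address.
    /// @param token The ERC-20 token to stream.
    /// @param recipient The address that receives the streamed tokens.
    /// @param amount The total amount to stream, in units of the token's decimals.
    /// @return streamId The identifier of the newly created stream.
    function createStream(address token, address recipient, uint256 amount)
        external
        returns (uint256 streamId);
}
```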

Nicholas: Super great. And especially for someone like you, who's writing really detailed comments. That's awesome.

Paul Razvan Berg: Great.

Nicholas: And do you use that in the Sablier v2 docs at all?

Paul Razvan Berg: Yeah, it's fully automated. So the way we go in our docs, we only write the English explanations of how the contracts work together. We don't document the low-level reference; that's fully 100% automated by the script.

Nicholas: Like a GitHub action or something?

Paul Razvan Berg: Oh, it's like we just run it manually when we make an update to the contracts, which we don't make that often because the protocol is immutable and not upgradeable.

Nicholas: Got it. OK, so testing in Foundry. You did this really cool talk suggesting some best practices for testing in Foundry at EthCC about a month ago; we're recording this in August 2023. I want to talk to you about that. But before we get into that, when you start a new Solidity repo, what do you fork? What's your template?

Paul Razvan Berg: Oh, but I have multiple templates. You go to my GitHub and you search for foundry template. There's one which is kind of popular. It's foundry-template. And that's what I use. I keep that actually very... I maintain that continuously. And every time I learn something new about Foundry, or there's a breaking change in Foundry, or Solidity publishes a new version, I go to my template and I upgrade it. That's what I use for every single new Foundry project. I actually have more than just Foundry. So I also have a Hardhat template, which I used to use before Foundry was a thing.

Nicholas: Yeah.

Paul Razvan Berg: I mean, they've done, you know, amazing work. But I mean, Foundry is...

Nicholas: And they're also working on Forge-style testing, Solidity testing, I heard a while ago. But Foundry is just simpler for non-JavaScript gigabrains like me.

Paul Razvan Berg: Okay, so I have that. I have some generic programming templates for Rust, TypeScript, JavaScript. So what I do when I'm learning a language, I synthesize my learnings into a template where I put the basics. Like, you know, in JavaScript, I put the package.json, some dummy Hello World script. And I always go there. I don't like the idea of having to go again through some documentation website, for example, for Rust's Cargo. I don't want to have to go back to the Cargo website. I just go to my template. I see my three, four essential commands. And that's what I do. And I recommend everybody do the same thing. I think it's awesome to have your own templates that you can always spin up when you need them.

Nicholas: Totally. I'm definitely going to fork yours. I have my own template, but I'm sure you know some things I don't. You know, dealing with dependencies can sometimes be a challenge and is not the interesting part of working on a project. So I'm sure you've got some good tips for that. Are you importing everything with Forge, or are you using Yarn or npm or pnpm or something?

Paul Razvan Berg: It's a mix of both. So for contracts, it's Forge submodules, and for the non-Solidity stuff, for example, I'm still using Prettier for formatting Markdown, JSON and so on, and for that I'm using pnpm. I think it's pretty cool. I recommend using this over, like, Yarn and npm. Yes.

Nicholas: Oh, sorry. TNT? No, pnpm. Oh, yeah. That's what I use too. It's great. It's great.

Paul Razvan Berg: Yes. It's pretty cool. Yeah. And just like the output looks amazing.

Nicholas: And as long as you don't hit a snag with it, as long as it doesn't cause a problem.

Paul Razvan Berg: Yeah. You do have to maintain your dependencies, but you don't have to do that often. Like what I see happening with my templates is whenever there's like a necessary update, somebody will make a PR because that's what the open source people do. Yeah.

Nicholas: And are you using remappings.txt? Are you outputting something to that, or are you using the automatic stuff? I always feel a little bit uncertain about how Forge is figuring out its automated remappings.

Paul Razvan Berg: Same. And speaking of this, I actually had like a long winded debate about this on GitHub. I opened like two or three issues. There were some bugs they fixed very quickly. So kudos and props to the Foundry team for this. But I think remappings in general are, well, a mess. I actually tweeted this, remappings are a mess, a few weeks ago. Not a big fan. I even talked to George about this. I think they're considering moving to a fork of Cargo, which would be pretty cool. It's still at the idea stage. Don't have any expectations anytime soon. But that would be awesome, to have like a Rust-level dependency manager. But until then, it's remappings that we use. Yeah. In terms of automation, no, we write them manually. And it's still the case that Forge will add its own default remappings. Yeah.
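
For context, a remappings.txt is just a list of prefix-to-path rewrites that the compiler applies to import statements. The entries below are typical examples, not Sablier's actual file.

```text
forge-std/=lib/forge-std/src/
@openzeppelin/=lib/openzeppelin-contracts/
@prb/math/=lib/prb-math/src/
```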

Nicholas: Yeah. That's what's weird. If you have a remapping.txt, it doesn't override everything. It's still so weird.

Paul Razvan Berg: It's weird and sometimes it's going to break in all kinds of unexpected or recursive ways.

Nicholas: That's the issue when your submodules have Yarn dependencies, or you've remapped something but you have the same dependency remapped differently.

Paul Razvan Berg: Yeah. And I just spent like a few hours, two weeks ago, on this particular issue with one of our Sablier V2 users who wanted to import, integrate Sablier V2 in a Hardhat project. And we're using Foundry, but we have an npm package. And all hell broke loose. I found like two bugs, one in Hardhat, one in Foundry. But even after those bugs were fixed, there was still another limitation. And now apparently there is some speculation of there being a bug in Solidity itself with the remappings. So it's kind of insane. I don't know what solution there is other than poking George to implement that Cargo-style dependency manager sooner.

Nicholas: It's good because he's a perfectionist. So I'm sure it annoys him just as much or more than it annoys us. It is.

Paul Razvan Berg: Yes.

Nicholas: It's great. I mean, the dev tooling is wonderful that they're building over there. And you're doing God's work, filing all these reports and figuring things out. So, okay, testing in Foundry. What are some of the... I really liked how you set it up in the EthCC talk, because it's kind of like, Solidity is just new. Foundry, Solidity-based testing is maybe two years old, a year old, something like that. And people don't have best practices. People are naming their tests like test one, test two, test three. And we need some standards, which is wonderful, because my experience of coming into the Foundry community is, it started off with some very galaxy brain kind of people. And there's like a presumption that you've already done a lot of testing in other frameworks and you know all kinds of stuff. And everybody's sort of dealing with these remapping problems, but just sort of getting it to work and then trying not to touch it. So I think doing this kind of work that you're doing, really clearly coming up with a practice, a best practice for how you can go about this, which we can then criticize and improve on over time, is really a high quality thing to do for the community and for your own projects, too, of course. So I'm curious, can you tell us a little bit about... I saw you broke down the types of testing categories. And then maybe we can talk about the tree technique in a minute.

Paul Razvan Berg: Sure. And thanks for the shout out. Yeah, I spent quite a bit of time working on my presentation, so I'm glad that people liked it. As you said, the problem with Foundry is that we basically, as a community, we have maybe one year and a half or something like that since more of us, in big numbers, started using Solidity to test their entire protocols. In their defense, there's MakerDAO, who was the DappHub team, who were like on their own island using dapptools and all that jazz. The original dapp test, right? Yeah, those guys were the original dapp people for Solidity testing. However, they were kind of isolated. They were using this weird package manager called Nix, which every time I tried to use, it just broke my computer. But anyway, Foundry really solved it in one fell swoop with their Rust implementation. And that happened only in 2022. And the problems I noticed at a very high level, looking at various codebases in Foundry, is that we don't have directed development. There's a lot of unstructured files. There's basically no categories. There's just like one big test folder and there are almost no plain English specifications. And one other thing that I missed from the JavaScript ecosystem, from the Hardhat ecosystem, is the describe block hierarchy. If you guys are familiar with Mocha in JavaScript, you can have these describe statements, which start with a string, which you can put any English in there. And then you can have this anonymous function, which gives you this hierarchy of, you know, a mini plain-English specification and then a function. And then you can have nested describe blocks within other describe blocks and so on. And I was actually using that in my previous projects. I was using that model to nest conditions. And the point was, you know, you have a function and for that function to be even executed, you need to have a deposit made. So I was having something in JavaScript, like three years ago, of having a describe block at the top saying, when the deposit was made, do this. When the deposit was not made, do this. And for a long time, when I picked up Foundry last year, I just couldn't find any solution to this. Like, how can I replicate that? Because Solidity doesn't have anonymous functions. You can't really do that hierarchy. Like, you can define a function as a parameter, Solidity actually accepts function pointers, but you cannot define functions and immediately pass them as parameters. So that was a problem.

Nicholas: And when can you pass functions as parameters?

Paul Razvan Berg: If you have the function defined elsewhere, you can pass a pointer to it. It's like a function type.

Nicholas: Right. I remember seeing this recently and thinking, wait a second, is Solidity actually JavaScript?

Paul Razvan Berg: Well, not quite, because of this thing. You can't really define a function inline. I think you can only use a function type, and then you have to provide the function, and then that will eventually be executed. But anyway, so the problem is, how can you have a hierarchy of test conditions in Foundry?

Nicholas: Because in Mocha, you have things like this describe function where you put in a bit of text where you say, like, when the user sends a transaction. And then within that describe function, in its code block, you have an it function. And the it takes a string as a parameter, where you say it, you know, should transfer 100% of the value to the recipient or something like that. And then you can have multiple its that hang off of the describe statement. So in a certain circumstance, this, and then that, and then that and that should happen. So it's kind of got a... it's not invariant exactly, but it is a testing logic that has a kind of Englishness to it.

Paul Razvan Berg: Precisely. And I thought deeply about how I can replicate that in Solidity itself. And I couldn't think of any specific way in just pure Solidity. I toyed with those function types for a while, but nothing came out of that. And I said, look, what is... I just had this moment of realization: what is a function, not just in Solidity, but in a computer program? Well, it boils down to bytecode, to opcodes, right? But fundamentally, at an emergent level, what is a function? It's basically a tree of execution paths. So if you have a very simple function, which has one if-else statement, then you have two execution paths, assuming that the condition, the if, is binary. So with that idea in mind, with that conjecture, I said, okay, look, what if I turn that execution path into a visual thing? How can I do that? And then I discovered... I mean, I knew about, you know, the ASCII tree format, which is typically used for representing folder hierarchies. And if you know the tree command, the tree CLI in the terminal, that's the format it outputs. And I said, look, what if you use that tree format to write up your English specification for what are the execution paths that should lead to what tests? And I just went along, I found a VS Code extension that was giving me syntax highlighting. And I just experimentally, from there, I started using it for Sablier V2, that was at the beginning of this year. And I, you know, slowly but steadily worked my way up. I saw that it actually works at scale. Now we have more than a hundred or something of these trees in our own codebases. So it can be used in principle for anything, for any kind of function. It works for any kind of test, but it works better for unit tests and simple integration tests between multiple contracts. For invariants, I had some ideas for applying it there too, but that's more of a work in progress, an ideation, brainstorming process. Like now, the way we use it, it's like an English specification framework that is, by the way, cheap. And the only requirement is that you have to keep these tree files in version control. Compare that to other more advanced models like, you know, Certora, which is, by the way, amazing software, but to learn Certora and stuff like that, you really need to put in the hours, the days, whatever. Whereas this model is just plain English, you just, you know, think deeply about all the possible execution paths. And then, as you said, in Mocha you have the it, you basically say it should do this, and then it should do B, C, D and so on and so forth. And you create a tree of, again, the conditions and expectations. And, you know, I published this shortly before EthCC and then some people chimed in, liked it, they gave me some recommendations. I didn't know that there is actually this more formal specification language called Cucumber Gherkin, which is like a traditional software framework for... It's basically like a branching tree technique on steroids, a very formal way. They have a whole set of keywords. But the funny thing is I actually didn't know about that framework when I came up with this model. But it turns out that there are quite a few similarities between the branching tree technique, BTT in short, and Cucumber Gherkin. Both are good. But you know, the BTT thing is super quick and just works very well at scale.
And you don't need any framework. With Gherkin you need to have the CLI and there's a framework implementation or whatever. But anyway, to cut to the chase, yeah, you know, the benefits, and I will end here, I know I have a long monologue, but the benefits are: plain English, you know, easy to learn and teach. You can share this with your non-technical team members. And in fact, that's what we did at Sablier. We shared these trees with our frontend team and they, you know, they followed the script in a way. For example, when we designed the CreateStream form, they said, OK, these are all the conditions I need to check for the CreateStream transaction to be valid. And, you know, that tree basically functions as a guideline for both your Solidity team and your frontend team. And it's also potentially automatable. There was this guy, Alex, I think he's working at Sense Protocol. He built, just last week, I forget the name of the tool, let me look it up because he deserves a shout out, but he basically built this simple CLI that takes tree files written in this ASCII tree format and then it outputs a skeleton Solidity file with your function names and your modifiers which replicate that tree. We didn't even dive into the modifier idea. But the long story short is that there is now even a CLI that can help you automate the writing of tests.
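
To make the format concrete, here is what a hypothetical withdraw.tree file might look like; branch nodes are conditions and leaves are the expected behaviors. This is an invented example, not one of Sablier's actual trees.

```text
withdraw.tree
└── when the stream id is valid
   ├── when the caller is not the recipient
   │  └── it should revert
   └── when the caller is the recipient
      ├── when the withdraw amount is zero
      │  └── it should revert
      └── when the withdraw amount is not zero
         ├── it should update the withdrawn amount
         └── it should emit a Withdraw event
```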

Nicholas: Amazing. So basically you... OK, I have a bunch of questions and I do want to get to that modifier thing you mentioned, because it is an interesting, subtle thing. And I'm curious if it's a good idea or not. But basically you write out all of the potential states of your contract in a text file to which you append .tree as an extension, and put it in your repo. You do it for every function. In the talk, you break down the different kinds of tests, which sure, everybody knows, but it's good to talk about that. You have unit tests, which you really think of as strictly only one contract, like the one function not interacting with any other contract. Integration is anything where it's touching another contract. Invariants are conditions that you know must always hold true. And then forking you do separately. I thought it was interesting that you break forking off separately, because in the most recent large project I did, I just did all of my tests in a forked environment. But it's interesting to hear that you do it separately. And maybe it depends on the project. And then you also describe concrete versus fuzzed tests, which can be useful for all of the previously mentioned types of tests. So then you generate this. I also don't want to forget, in the talk you mentioned naming of tests. You give an example like testFuzz_RevertWhen_Foo as a good name for a test, rather than just, you know, test transfer one, test transfer two. So there's just a lot of very basic things about testing that I think you're kind of suggesting some best practices around. And who knows if people will adopt all of these things. But it's much nicer to have this conversation clearly and in the open rather than, you know, oh, just go look at the tests of some popular repo, some high quality repos, and try and figure out some best practices. I really like that we're having this conversation in a public and accessible way, where something like a .tree file, rather than some highly specified formal verification language, maybe would only be used by, I don't know, the Yearn devs or something. In this case, I think it's very accessible. And if people at the high end of the Solidity elitism scale or pro scale want to do something more formal, they can obviously swap for something else. But the tree thing is very, very accessible. A couple of questions. Maybe can you describe how you're using the modifiers? These are modifiers that have no content, right? They're just, like, to document what the function is testing.

Paul Razvan Berg: They don't have to be empty. So let me backtrack a bit to explain the problem there. So you have the tree file and you write your hierarchy of English sentences and so on. Then the question comes, in Solidity, how do you mirror that hierarchy? And when I zoom in on a particular function, how do I see what are all the states, all the conditions that have to pass for that test itself to be executed? And what I discovered, again, very empirically, experimentally, is that a nice trick to aid you visually is to use modifiers which you apply to all the functions that are nested in some particular path in a tree. And let me give an example because that's easier. So suppose you have, you know, when the deposit was made, when the caller is Alice. And for the test that sits beneath both of these branches, nodes in the tree, you would have two modifiers, whenDepositMade and whenCallerAlice. And you apply them to the final test in your test file so that you can have this visual feel that, oh, look, these conditions have to be met here for this test to pass. And to other tests which don't have to have those conditions for them to pass, you just don't apply the modifier. And so basically, at the top of the file, if you go to our repo, v2-core, at Sablier Labs on GitHub, you will see that at the top of the file, we have a few modifiers applied. But as you go down and you near the end of the function execution, basically the complete function execution which passes, you will see lots of modifiers. Now, to finally answer your point about being empty or not, they don't have to be empty. Like, you can actually have the state set up within a modifier. And that will give you some shared logic, because you can write it once and then every time the modifier is applied, the logic is implemented with that modifier. But oftentimes, they are empty. And their only goal is to give you this visual cue that you are at some particular depth in your tree. And it's just a way to double check your assumptions continuously as you write the test. If it's a very big, convoluted test with 30-40 lines, it's useful to have those 10 modifiers to remind you that, oh, look, all of these conditions have to pass. It's not like a strict enforcement, like Solidity will not actually check that the English condition matches the actual code. But you as a human can more easily catch errors. And for example, this is how we caught errors in our docs, because somebody was writing their condition and saying that, oh, the caller should be Alice. But then you look at the test, and the caller is Bob. And, you know, you might be able to catch that.
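
Here is a hypothetical Foundry test skeleton mirroring a tree like the one above. The contract, names and bodies are invented; only the structure, mostly empty modifiers as visual cues plus one that sets up shared state, follows the pattern Paul describes.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

contract Withdraw_Test is Test {
    address internal recipient = address(0xBEEF);

    modifier whenStreamIdIsValid() {
        // A stream could be created here so every nested test shares the setup.
        _;
    }

    modifier whenCallerIsRecipient() {
        vm.startPrank(recipient);
        _;
        vm.stopPrank();
    }

    modifier whenWithdrawAmountIsNotZero() {
        _; // Empty: purely a visual cue mirroring the tree.
    }

    function test_RevertWhen_CallerIsNotRecipient() external whenStreamIdIsValid {
        // vm.expectRevert(...), then call withdraw() from a third-party address.
    }

    function test_Withdraw()
        external
        whenStreamIdIsValid
        whenCallerIsRecipient
        whenWithdrawAmountIsNotZero
    {
        // Call withdraw() and assert the withdrawn amount and the emitted event.
    }
}
```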

Nicholas: Yeah. So it's not that in the modifier... First of all, which test are you looking at? Which file are you looking at that people might want to look at while we're talking about this?

Paul Razvan Berg: Oh, if you go to sablier-labs slash v2-core and search for statusOf.

Nicholas: statusOf. Okay. So this is a unit test?

Paul Razvan Berg: It's an integration test because it uses tokens basically in the back end.

Nicholas: Everything you're saying about modifiers could apply to any of the types of tests or only to integration?

Paul Razvan Berg: Unit, integration and fork, yes. But not invariants yet. I have to come up with some tricks for that.

Nicholas: Okay. Because this is kind of my question. Like, wouldn't the ultimate form of this be that the modifier's body is the invariant test? So that the compiler does check, or the test run does check, that it conforms to the modifiers you're putting? Because right now you're using it kind of like a text label. You're grabbing some of the text labels from the .tree file and you're just putting them in as these sometimes empty modifiers, just so that they're visually represented in the test file and you can know what to check. But ideally it would be testing, actually testing, that those conditions are met. I do understand what you're saying, which is basically like you accumulate these modifiers, which maybe are only labels currently. But as you accumulate them in your tests, you don't need to check things that you've previously tested, because you add them as modifiers, labels in the thing. But do you know what I mean about... could they maybe be actually functional in some way? Like could they be invariants? Tests against your conditions atop the tests, that run before or after the test runs, to check that they're accurate?

Paul Razvan Berg: It's a good question. I do think about them this way. I am strictly using them as text labels now. But in the long term, I guess this sounds like the holy grail of, like, combining some kind of GPT tool to check the correctness of everything from the English spelling to the actual condition being an invariant itself. I guess that's feasible in practice. I do want to give a shout out to this project I was talking about before, I found it. It's called Bulloak, built by Alex Fertel on Twitter. And what it does is automate the scaffolding of Solidity tests by starting from those tree files. So I think in principle, what you can do is something more primitive, but I guess an intermediate step between what you were suggesting and what I have now, is to have some kind of CLI that expands on Bulloak, which checks that the modifiers actually match your tree files.

Nicholas: Right.

Paul Razvan Berg: As in, exactly: if you have an extra modifier, it will scream at you. If you have one missing, it's going to say, oh, you don't have this, you have to put it in.

Nicholas: Just like a nice checklist when you're going in and actually writing your tests. How do you spell that person's name?

Paul Razvan Berg: So it's Alex F-E-R-T-E-L.

Nicholas: Are you tight on time or can we go along today?

Paul Razvan Berg: No, I can.

Nicholas: Excellent. Okay.

Paul Razvan Berg: It's a lovely conversation. Yeah, I love it.

Nicholas: Yeah, it's great. I think people are going to enjoy all this alpha you're sharing about development. I certainly am. So, okay, we talked a little bit about my harebrained idea of making these modifiers invariants. There is something about how you're doing the invariant testing that you discussed in your conversation with Shafu. There's a YouTube video where Paul talks on a show called The Bytecode, or something like that, where they go through the Sablier contracts pretty much in detail, and a lot of the testing stuff is discussed. If you're enjoying this conversation, that's a YouTube worth watching, and I'll put it in the show notes for the recorded version of this podcast that will go up next week. But there's something you're doing that's interesting about invariants that I didn't 100% catch when I was reviewing it. Can you explain how the invariant testing works in your scheme?

Paul Razvan Berg: It's not like we invented anything new with the invariants. I can recommend folks go look at the Foundry Book website, at the invariant testing tutorial written by Lucas Manuel. Invariants are these conditions that have to always hold true. And I would say, look, I think what I said on The Bytecode show is that we introduced a new kind of contract in the invariant setup. So typically, if you look in the Foundry Book, you have your invariant test files, you have your invariant handlers, and that's it. The handlers are these middleware contracts that sit between your invariant test contract and your actual production contracts. And their role is to check that some conditions pass so that there will be no revert. Because if there's a revert, then you're not actually calling anything meaningful, you're just calling at random. Anyway, so there was that, but then...

Nicholas: The handler piece, I think, was kind of interesting. Like you're not running the invariants. There's something about how Foundry runs invariant tests that you're trying to avoid doing randomly. So you use these handlers to more specifically guide the testing framework to do what you want.

Paul Razvan Berg: Oh, sure. Sorry, I thought that was common knowledge. Let me backtrack and explain that bit. So Foundry has this setting where you can activate failing on reverts or deactivate it, and by default I think it's off. When it's off, you don't need handlers, because what Foundry will do is just generate a random sequence of hundreds of calls to your protocol, and it will execute that sequence and not stop even if it bumps into a revert. So you can think of this approach as spinning up a bunch of monkeys, which you put at your computer, and they randomly press keys to try to break your system. It might be the case that they discover something you didn't think of, or click something that ends up in an actual, genuine error state. But more often than not, it's probable that they will achieve basically nothing. They will just click at random and do nothing with your software.

Nicholas: This is different from fuzzing, where you're specifying a specific variable, and Foundry is very smart about picking values that have already shown up in your code or elsewhere in the execution of the tests, so that it's very likely to throw zero and other numbers that are likely to cause problems into a specific test. But with invariants, you can do something similar, except it's across multiple functions, and it's kind of random if you don't guide it.

Paul Razvan Berg: Precisely, yes. So it's more of an emergent, high-level phenomenon where you orchestrate Foundry to call your entire protocol, your entire set of smart contracts, not just one function in particular. And you can still fuzz particular values like amounts, addresses and so on, but those get fuzzed and passed to a whole bunch of functions. And you have this very important setting, which I was describing before. You can take the monkeys-at-the-keyboard approach, where you let reverts happen and you don't stop the invariant campaign when they do. But there's the other mode, where you activate failing on reverts. In that mode, if you hit a revert, that's a potential candidate for a bug. Because what you do is fuzz your inputs, but then you say, oh, for example, in Sablier, you cannot have zero deposit amounts when creating streams, so we're going to check that the deposit amount is not zero. Because if it were zero and we were to pass it to the Sablier protocol, we know we would get a revert. But we don't care about that revert, because it's going to revert in the expected way.

Nicholas: You're not talking about bounding the fuzz number. You're talking about... Oh, I am. Oh, you are. Okay. Okay.

Paul Razvan Berg: You're basically bounding the fuzz for all the functions that you want to touch in your protocol.

Nicholas: When you do an invariant, that's what you're effectively doing? Yes.

Paul Razvan Berg: Yes.

Nicholas: Okay. Okay. All right. And so you were going to say about handlers next?

Paul Razvan Berg: Yeah. So the handler is the special contract that performs these bounds and checks, and sits between your actual invariant conditions and your production contracts. In your invariant conditions, you specify the actual mathematical checks you want to make. For example, the total supply has to match the sum of all the account balances. Then the handler generates the sequence of calls with the appropriate checks, and through the handlers, Foundry calls your protocol. And after every single run, so you run function A, Foundry will go back to the invariant definitions and run all of them. So then it will go...

Nicholas: Handlers execute after every test?

Paul Razvan Berg: Yes.

Nicholas: Okay, I see. Okay. And is that something Foundry gives you, handlers? Or is that just a pattern you use in Foundry?

Paul Razvan Berg: The handler is more like an abstract knowledge that you have to be aware of as a developer who wants to use invariants with reverts activated.

Nicholas: I see. Okay.

Paul Razvan Berg: And it's in the Foundry Book. If you look up the invariant testing tutorial in the Foundry Book, this is all explained there. And Sablier, I think, is one of the cleanest implementations of this concept: we have these three or four handlers, with an abstract shared contract that implements shared logic and so on. But yeah, what it does is what I said here: it sits between your invariant definitions and your actual protocol.
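
As a rough sketch of the pattern Paul describes, and not the actual Sablier handlers, here is a hedged Foundry example: the handler bounds the fuzzed inputs so calls never revert for trivial reasons, and the invariant contract checks a condition such as total supply matching the sum of balances. The token, handler, and holder names are all illustrative.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

// Hypothetical minimal token, standing in for the production contract.
contract SimpleToken {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    function mint(address to, uint256 amount) external {
        balanceOf[to] += amount;
        totalSupply += amount;
    }
}

// The handler sits between the invariant test and the production contract.
// It bounds the fuzzed inputs so calls do not revert for trivial reasons.
contract TokenHandler is Test {
    SimpleToken public token;
    address[] public holders;

    constructor(SimpleToken token_) {
        token = token_;
        holders.push(address(0xA11CE));
        holders.push(address(0xB0B));
    }

    function mint(uint256 holderSeed, uint256 amount) external {
        address to = holders[holderSeed % holders.length];
        amount = bound(amount, 1, 1_000_000e18); // no zero or absurd amounts
        token.mint(to, amount);
    }

    function holderCount() external view returns (uint256) {
        return holders.length;
    }
}

contract Token_Invariant_Test is Test {
    SimpleToken internal token;
    TokenHandler internal handler;

    function setUp() external {
        token = new SimpleToken();
        handler = new TokenHandler(token);
        // Fuzz only the handler, not the token directly.
        targetContract(address(handler));
    }

    // Checked by Foundry after the fuzzed call sequences run through the handler.
    function invariant_TotalSupplyEqualsSumOfBalances() external view {
        uint256 sum;
        for (uint256 i = 0; i < handler.holderCount(); ++i) {
            sum += token.balanceOf(handler.holders(i));
        }
        assertEq(token.totalSupply(), sum);
    }
}
```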

Nicholas: Very cool. And that's with reverts on, you said?

Paul Razvan Berg: Yes. If you deactivate that setting for whatever reason, then you don't need the handlers, but we don't actually do that. Again, as I said, I like to compare that mode to putting monkeys at a keyboard and letting them try out your protocol. They might get something, but it's very unlikely.

Nicholas: Got it. Because you prefer that the whole call sequence be stopped if it reverts during an invariant run.

Paul Razvan Berg: Yeah, yes.

Nicholas: Yes. Got it. Okay. Wow. So much. Before we leave testing entirely, I've never done TLA+ or any of the formal verification languages. I'm curious. I don't really have a specific question here. We talked about the Cucumber Gherkin thing, but I'm just curious if you have any thoughts on formal verification for tests. Maybe a different way to ask this question is: so far we've talked about the tree technique you've come up with as a good, structured best practice when you're sitting down to write tests for a contract you've already written. But I wonder if you can imagine doing it in the opposite direction, sort of front-loading the tree, where you write the tree file before you even write the original function in the source contract, rather than just when it's time to write the test. Can you imagine a test-driven version of this, or does that just suck all of the fun out of writing Solidity?

Paul Razvan Berg: No, but actually, I've done that for a couple of functions. I was so immersed in the tests, I was testing a function right there in my test environment, and then I said, hmm, what if I now start with the test tree? And it worked fine. I wouldn't say there was any major downside. I'm pretty ambivalent about all of these, behavior-driven development, test-driven development; I think they're basically mostly equivalent, if you're spending a lot of time on testing anyway, which I think you should. So I'm agnostic on that point. But about formal specification languages like TLA+ and all of those, I'm not an expert either. I just had a skim through the TLA+ page, and there's Cucumber, which I had a look at. And there's also the K Framework, which, by the way, has a very nice integration with Foundry. I think you can get a simple, very basic report just by running your Foundry tests. So go have a look at that, the K Framework; cool work they've been doing. But also worth mentioning is the SMTChecker. If you guys don't know this, it's probably the lowest-hanging fruit in terms of Solidity formal verification.

Nicholas: I know I have to look at it, but I haven't touched it yet. Can you explain it briefly?

Paul Razvan Berg: Yeah. So it's a built-in formal verification system in the Solidity language itself. You can activate it just by turning on some settings in your Foundry config. And what it does: it doesn't even look at your tests, it just looks at your contracts themselves. One of the biggest turn-offs is that you do need to have asserts and requires and ifs. But the way you can go about it is you add two or three asserts, you run the SMTChecker, and you get your formal verification report, which you can use as proof that your protocol works as expected. Although in some cases it's still helpful to keep your asserts even in production, and that's a long conversation in itself. But to come back to the point: in short, the SMTChecker is a formal verification module built into Solidity itself. And this goes all the way back to what we were talking about at the beginning, with the Solidity team not doing enough marketing, because it's a pretty cool feature. We tried to use it at Sablier, and our code was too complicated for it to give us any meaningful output. But for simple contracts with a few functions, and especially for contracts which do not use the modern features of Solidity, which are not very compatible with it, this feature is actually pretty great, because you literally get a formal verification report from Solidity itself.
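
For reference, a hedged sketch of the kind of contract the SMTChecker can analyze: it looks at the contract itself and tries to prove the asserts can never fail. The exact configuration keys and engine names should be checked against the current Solidity and Foundry documentation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// The SMTChecker analyzes the contract itself (not the tests) and attempts
// to prove that assert statements can never be violated.
//
// Hedged configuration note: with plain solc this is enabled via the model
// checker options (for example `--model-checker-engine chc
// --model-checker-targets assert`); in a Foundry project the equivalent
// settings live under a model_checker section of foundry.toml. Check the
// docs for the exact keys for your versions.
contract Counter {
    uint256 public count;

    function increment() external {
        require(count < type(uint256).max, "overflow");
        count += 1;
        // A property the checker will try to prove for all reachable states:
        // count is never zero after incrementing.
        assert(count > 0);
    }
}
```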

Nicholas: That's very cool. I'm going to have to play with that. Are there any specific modern features of Solidity that come to mind that it's incompatible with? Like the custom types or something?

Paul Razvan Berg: Yes. So user-defined value types don't play well with it, and, what was it, things like mappings and math operations in certain contexts where you're calling an internal function; it can't handle that, because there are too many functions and, with the syntactic sugar, it can't resolve them. It's a pretty complicated topic to talk about on a podcast. But let me share my tweet about this with you, and maybe you can put it in the show notes.

Nicholas: Yeah, I found it. I'll put it in the show notes and people can check it out and learn more about this. Maybe it'll become more compatible if people show interest in it, I wonder. One last question on testing in general. Are there things that are hard to define, like states, especially for integration tests? Are there states that are hard to define in a tree file? Is it possible that there's some interdependence, or something cyclical, maybe a function that has to be reentrant for a reason, or some state that's hard to map out in a bullet point in a .tree file, that this might not cover yet?

Paul Razvan Berg: Excellent question. And there is one wrinkle with this technique, which I'll explain in a bit. But before that, I do want to say that reentrancy is actually not that hard. Reentrancy is just a node in your tree. You're basically saying, when there is reentrancy, then this should happen: there should be a revert, or there should not be a revert. And we do have a tree with reentrancy, actually multiple trees, but one worth having a look at is the withdraw.tree file in Sablier's code. So reentrancy is relatively easy.

Nicholas: Is that because of the hooks?

Paul Razvan Berg: Oh, yes. So we have the hooks in V2, which enable the possibility of reentrancy.

Nicholas: Which we'll get to in a second.

Paul Razvan Berg: Yeah, but in general, I'm not a big fan of reentrancy guards. I think they're a lazy man's approach. You basically have to test what happens if there's a reentrancy. Two things can happen. Either it's a no-op, as in, it re-enters the function and just does the same operation twice in a very roundabout way, or there's a revert. That's what should happen. You should not allow any funky behavior if you're using checks-effects-interactions, or, what was the hot topic a few months ago, FREI-PI: function requirements, effects, interactions, and protocol invariants at the end.

Nicholas: Oh, I didn't see that.

Paul Razvan Berg: Yeah, there was an article by Brock Elmore from Nascent, the VC fund. He put up this new approach for the order in which you define the bits and pieces in your function. There's the famous checks-effects-interactions pattern, but he proposed this iteration on it called FREI-PI, which is function requirements, effects, interactions, and protocol invariants at the end. Anyway, to finally answer your question about what limitations there are with the tree approach. It's what I call... and I have yet to post publicly about this issue, but I talked about it with Matt Solomon, who's a Foundry dev. Let me look it up, because I wrote it down and it was very specific. Okay, so the problem is with what I call static bimodal functions. These are functions which allow either one of two branches to occur. So imagine you have a function that can be called by either Alice or Bob, and the execution would be exactly the same. If Alice calls it, the same output; if Bob calls it, the same output. This is something that in principle you can fuzz, as in with the typical fuzzer, but it's overkill, because you know who the only accounts that can do it are: Alice and Bob. So how do you represent this in the tree? And how do you actually implement it without duplicating a bunch of tree logic? That is, suppose the function performs a withdrawal. You want to check that the balances have been updated, that there has been a call to the ERC20 contract, that some state changes have occurred, that the internal accounting of the system has been updated, all those things that have changed. How do you put these conditions in the tree? There are multiple ways. One way is to say: when the caller is either Alice or Bob, just like that. But then how do you not duplicate this in the Solidity part? Because there's no cheat code for saying, start the prank with Alice and then do it again with Bob.

Nicholas: Right. I guess you could put it in a separate function and then call it with each. Not very pretty.

Paul Razvan Berg: Well, it wouldn't work, because you would have two tests. You wouldn't have the same test with, like, two accounts. Oh, no, no, wait.

Nicholas: You can maybe have a helper function. Yes, yeah. It's ugly because you have to do it outside the scope of the test.

Paul Razvan Berg: Yeah, it would be much nicer to have a declarative way to do this, especially in the tree. Or you could say you only test a special case with Alice as the caller and build the rest of the tree assuming Bob is the caller as well. Or do you duplicate both branches and all of their children? The problem with the first approach I mentioned, with either Alice or Bob, is that it doesn't scale when you have three or four or five parameters. It gets really ugly in the tree to say: when this value is A, B, C, D or E. Same with the implementation in Solidity. If you're only testing Alice, and not Bob and Carol and so on, then you're missing out; you're not testing some branches, some conditions of the protocol that you would want to have covered. And duplicating the branches is kind of messy and hard to maintain. So the solution to all of this is basically a missing feature in Foundry. I proposed a new cheat code called vm.consider. And if somebody from Foundry is listening to this podcast, or is here now, please implement this. It would absolutely help with the tree testing thing. The idea is that you provide... It's something in between assume and fuzzing.

Nicholas: You give it an array, basically. Yeah, exactly.

Paul Razvan Berg: Exactly. Giving it an array to say, I want you to run this test, in a very declarative way, with this array as the input, two or three times. Basically, the length of the array is how many times you run the test. And with that approach, even the tree becomes easier, because you can say: when the caller is allowed, or is whitelisted, or whatever. Because you have the vm.consider declaration, you don't have to do any funky helper functions with change pranks and whatever. So it's very specialized, in a way an edge case, but in practice it's really...
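
Since the vm.consider cheat code Paul proposes does not exist in Foundry, the usual workaround today looks something like the hedged sketch below: a shared helper is run once per allowed caller, at the cost of doing the enumeration outside the tree file. The Vault contract and its rules are hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

import { Test } from "forge-std/Test.sol";

// Hypothetical contract under test: either Alice or Bob may withdraw,
// with identical behavior in both cases.
contract Vault {
    address public alice;
    address public bob;
    uint256 public balance;

    constructor(address alice_, address bob_) payable {
        alice = alice_;
        bob = bob_;
        balance = msg.value;
    }

    function withdraw(uint256 amount) external {
        require(msg.sender == alice || msg.sender == bob, "unauthorized");
        balance -= amount;
        payable(msg.sender).transfer(amount);
    }
}

contract Withdraw_Test is Test {
    address internal alice = makeAddr("alice");
    address internal bob = makeAddr("bob");
    Vault internal vault;

    function setUp() external {
        vault = new Vault{ value: 10 ether }(alice, bob);
    }

    // One leaf per caller in the tree would duplicate everything below, so
    // the assertions live in a helper that runs once per allowed caller.
    // In a real suite you might also snapshot and revert state between runs.
    function test_Withdraw_WhenCallerIsAuthorized() external {
        address[2] memory callers = [alice, bob];
        for (uint256 i = 0; i < callers.length; ++i) {
            _testWithdraw(callers[i]);
        }
    }

    function _testWithdraw(address caller) internal {
        uint256 vaultBalanceBefore = vault.balance();
        vm.prank(caller);
        vault.withdraw(1 ether);
        assertEq(vault.balance(), vaultBalanceBefore - 1 ether);
        assertEq(caller.balance, 1 ether);
    }
}
```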

Nicholas: No, it seems like something you have to deal with a lot. You know a handful of things that might happen and you want to test them. I'm actually curious to know now if the Solidity folks are paying attention to Foundry and are at all considering these things when developing the language. Because the language wasn't written for writing tests. I mean, of course, implicitly, contracts can call other contracts, etc.; it's part of the language. But I wonder if they're paying any attention to what's going on with the Foundry ecosystem, and not just Foundry, thinking about this problem of testing. Oh, they are.

Paul Razvan Berg: As far as I know, they have this regression testing pipeline where, every time they make a change or a new release, they prepare for it by running Solidity against a bunch of repos, and over time that's grown to tons of repos. And I think PRB Math is one of those external repos they're testing against. So basically, I think they are very much aware of the pain points with Foundry and so on. But you're right, Solidity doesn't have a testing environment, and there's a bunch of things they could do about this. I mean, via IR could basically be tailored for testing, they could do something about the coverage problem, and I could even envision some cheat codes being implemented in the language itself. That would be awesome.

Nicholas: Yeah, exactly. Okay, so I think we covered the testing subject pretty well. Now a fork appears in the road and you get to choose where we go first. Do we go to the left to PRB proxy, which will lead us eventually in a roundabout way to the other path we can go directly to now, which is Sablier. Which would you like to talk about first?

Paul Razvan Berg: Well, I have to go in like 30 minutes. So I don't think we can do both.

Nicholas: But maybe let's talk about Sablier and you can just explain PRB proxy in the context of Sablier.

Paul Razvan Berg: Yes. Sounds good. So Sablier v2 is the most recent iteration of our money streaming protocol. We are the pioneers of what we call lockup streaming. This means that you make a one-time deposit in a smart contract, and then every second a small amount of that deposit gets allocated towards the recipient. The use cases are things like token vesting, payroll, grants, and more recently airdrops. This model, this implementation of lockup streaming, solves the problem of discrete payments and trust between agents on the internet, on-chain. Typically, one party has to trust the other party to make a payment at some specific point in time, and the other party is expected to do the work or respect some promise or whatever. The nice thing about streaming is that it completely solves that problem. You create a stream and then just let time itself heal the problem. As in, imagine you stream your grants, and the grantee stops responding after five days. They didn't do any work. You can cancel the stream, and all you lost is the tokens streamed for a couple of days. So it's a model which, in many ways, aligns both parties and gives both parties peace of mind that time itself will take care of the arrangement. In practice, we've seen growth particularly with token vesting. The advantage there is that you just put the tokens in Sablier and they're streamed for two or three years. You don't have to think about monthly payouts. You don't have to think about manually making payments from a Safe. Just put them in our contracts, and the recipient can monitor and see their earnings increase in real time, by the second. And yeah, that's how it works.
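
The core of lockup linear streaming can be sketched in a few lines. This is a simplified illustration of the idea, not Sablier's actual implementation, which also handles cliffs, fees, token decimals, and various edge cases.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Simplified illustration of lockup-linear streaming: a one-time deposit
// unlocks to the recipient pro rata with elapsed time.
library LockupLinearMath {
    function streamedAmount(
        uint256 depositAmount,
        uint40 startTime,
        uint40 endTime,
        uint40 currentTime
    ) internal pure returns (uint256) {
        if (currentTime <= startTime) return 0;
        if (currentTime >= endTime) return depositAmount;
        // The streamed portion grows linearly with elapsed time.
        uint256 elapsed = currentTime - startTime;
        uint256 duration = endTime - startTime;
        return (depositAmount * elapsed) / duration;
    }
}
```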

Nicholas: Yeah, so there's a bunch of different things. Maybe just on the history first. We're short on time, so I'll just summarize. But it was inspired by a talk you heard from Andreas Antonopoulos, which is pretty cool. I wonder how he's doing. I haven't seen him in a while, but I looked at his Twitter and apparently he's migrated to Patreon fully, so I guess that's why I don't hear from him so much anymore. It's a shame, because he was the first person who really explained blockchain to me, through the power of YouTube. So that's cool. And then you did a draft EIP, ERC-1620, in November 2018. That was kind of the precursor to the first version of Sablier. Then you did the v1, and now we have this v2. Maybe, what's different between v1 and v2? What did you learn along the way?

Paul Razvan Berg: Great question. And by the way, thanks for digging this up. The history was spot on, with the timing and all that. So v1 was a simple system that had basically no bells and whistles. It was linear: the payment rate per second remained the same over time. And the payment stream itself didn't have any capability for integration; it wasn't tokenized. With v2, at a very high level, what we did is that now every stream is an NFT, which is transferable. This has many benefits. For example, as a recipient, if you want to change your wallet and you have a long-term stream, you can transfer the stream to some cold storage or whatever.

Nicholas: That's great.

Paul Razvan Berg: But also just integrations. We are in touch with various teams. I want to give a shout out to dFrag, which we just announced a partnership with yesterday. They're an NFT lending protocol.

Nicholas: We had them on the show a long time ago. Oh, nice.

Paul Razvan Berg: Yeah, they're great. And the idea there is that we created a new asset class, where you have these on-chain streams that have some value, because some part has been streamed, but they continue to be streamed every day. So there comes the interesting question: how do you price them? It's going to be a fun exercise for everybody to see how people do that in practice. It will involve some user education. But anyway, so we have this thing with NFTs that makes streams transferable, usable as collateral, and all that, which we didn't have in v1 and have now in v2. We also have non-linear streaming, and as far as I know, we are the first protocol to build this in DeFi. We call it Lockup Dynamic. So Lockup Linear in v2 is our fresh take on v1, but Lockup Dynamic is a new money lego that gives users the power to create custom streaming curves. To give you a simple example, consider you want back-weighted vesting, where the longer you stay with a company, the more tokens you're getting, quadratically or even exponentially. The idea being that people who stay with you for three years get not just one more year's worth of tokens than somebody who stays for two years; they get something additional, you know, 1.2x.

Nicholas: Staying for the third year is where you get the real payout. Yeah.

Paul Razvan Berg: Exactly. Yeah. You increase the payment rate such that the longer you stay, the more you get. So we have that. You can use it for all kinds of payment curves; the contract can actually support any payment curve. And by the way, this uses PRB Math in the backend, so it's pretty cool, pretty happy about that. You can also use it for things like making a vesting plan that unlocks every day or every hour. If for whatever reason your legal contract is structured that way, you can now build that with v2 too.
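
As a hedged illustration of the custom-curve idea, and again not the production code, an exponential, back-weighted unlock can be expressed by raising the elapsed-time fraction to a power greater than one before multiplying by the deposit. Sablier uses PRB Math fixed-point types for this; the sketch below approximates with an integer exponent to stay self-contained.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Simplified sketch of a non-linear (back-weighted) unlock curve:
// streamed = deposit * (elapsed / duration)^exponent, where exponent > 1
// means most of the tokens unlock toward the end of the stream.
library LockupDynamicMath {
    function streamedAmount(
        uint256 depositAmount,
        uint40 startTime,
        uint40 endTime,
        uint40 currentTime,
        uint256 exponent // e.g. 2 for quadratic, 3 for cubic
    ) internal pure returns (uint256) {
        if (currentTime <= startTime) return 0;
        if (currentTime >= endTime) return depositAmount;

        uint256 elapsed = currentTime - startTime;
        uint256 duration = endTime - startTime;

        // (elapsed / duration)^exponent, scaled by 1e18 to keep precision.
        uint256 fraction = (elapsed * 1e18) / duration;
        uint256 scaled = 1e18;
        for (uint256 i = 0; i < exponent; ++i) {
            scaled = (scaled * fraction) / 1e18;
        }
        return (depositAmount * scaled) / 1e18;
    }
}
```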

Nicholas: Yeah. So there's all these cool lockups. I mean, it's easier if you've seen the graphs which show the unlock rate; they give you a clear intuition for it. But there's a linear vesting schedule. You can do a cliff, so it waits a while before it starts unlocking. Those are the standard lockups. For dynamic lockups, there's an exponential one, like you mentioned. You could also, as you implied just now, change the time scale, so instead of unlocking linearly every second, maybe it only unlocks every two weeks, sort of like a step, a ladder or stairs, you can imagine it visually. So a lot of cool different types of unlocking. One thing you mentioned in the docs as a useful reason to use Sablier that I hadn't thought of: if you send a transaction, let's say you set up a lockup stream to somebody and you send it to the wrong address, you can revoke it right away, and you haven't actually sent all the ETH yet. So that's kind of interesting. You can even imagine people, if they're nervous about sending something, doing some kind of cliff stream with a one-day delay period, sending transactions that way, and double-checking on Etherscan or the Sablier front end that it looks good before walking away. So it's like an undo button, almost like Gmail undo.

Paul Razvan Berg: Yeah, I love that.

Nicholas: How expensive... Let's just say somebody were to do that. How much are they going to be spending in gas in order to set it up? I guess they've got to be issuing an NFT and stuff. There's got to be a fair amount of gas involved in setting up one of these streams, right?

Paul Razvan Berg: It varies by what kind of stream you're setting up. So a vanilla Lockup Linear stream is something like 260,000 gas.

Nicholas: Oh, that's great. For reference, an NFT mint on Zora is probably something like 180,000.

Paul Razvan Berg: Yeah, so it's like a bit of overhead for the streaming accounting on top of the typical NFT. Yeah.

Nicholas: It's like 10x a transaction, 10x just a straight transaction.

Paul Razvan Berg: Yeah, yeah. But for the more advanced curves in Lockup Dynamic, you can expect anything between 300,000 and even 700,000 gas in some cases.

Nicholas: That's not that bad. I expected millions for the more complex ones. Because you can do multiple different curves if you want; I guess those could be even more than 700,000 if you have a really complicated one.

Paul Razvan Berg: Yeah, but in the backend we're actually using this highly efficient struct with uint40s, so they're tightly packed, and the increase is not linear for every additional part of the curve you have.
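
The tight-packing point can be illustrated with a hedged sketch: timestamps fit in uint40, so a curve segment can occupy a single storage slot. The field names below are illustrative, not the actual Sablier struct.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Illustrative (not Sablier's actual) segment layout for a custom curve.
// uint40 timestamps are enough until roughly the year 36800, and packing
// the fields means several of them share one storage slot, which is why
// adding more segments does not increase gas costs linearly.
struct Segment {
    uint128 amount;   // tokens unlocked by the end of this segment
    uint64 exponent;  // curve exponent, as a fixed-point value
    uint40 timestamp; // end time of this segment
}
// 128 + 64 + 40 = 232 bits, so one Segment fits in a single 256-bit slot.

contract SegmentExample {
    Segment[] internal segments;

    function addSegment(uint128 amount, uint64 exponent, uint40 timestamp) external {
        segments.push(Segment({ amount: amount, exponent: exponent, timestamp: timestamp }));
    }
}
```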

Nicholas: Very cool. I mean, integration is so difficult, but I could imagine, on L2s especially: oh, you're in your Rainbow wallet and you want to send a transaction with an undo button, they could just do it. And you know, what's the difference? It's one penny or two pennies.

Paul Razvan Berg: That's an awesome use case, actually. Dude, the more I think about it... And I've heard a story; a long time ago, somebody told me about this. He used to work in TradFi, in a bank, and he told me how his seniors made a $300 million payment to the wrong IBAN. I don't know how they mess up IBANs at those stakes, but anyway, they did. And it was a two-month-long process with lawyers to convince the other party to give the money back in exchange for a reward. And with streaming, you can basically solve this by saying, look, hey, I'm going to give you the stream over one week. Just make sure that during the first day you make a withdrawal for testing, and then you're going to get it all in, like, seven days.

Nicholas: Yeah, it makes perfect sense. Is there a way to set up a stream where you don't have the opportunity to revoke it? Because that does seem like a danger, especially if you have something like this three-year vesting thing. Okay, I'm going to be the artist on your NFT project. It's going to take six months, but the majority of the payout happens in month six, and then in month five you revoke the stream. Can you? I mean, obviously there are ways to do it with contracts. But is there something built into the protocol to guarantee to the other end, so the recipient can see in the front end a guarantee that I can't revoke the stream, for example?

Paul Razvan Berg: Yes. And as far as I know, again, we are the first protocol to offer this feature. We have non-cancelable streams. So when you create a stream as the sender, by default it's cancelable, but you can opt in to make it non-cancelable. And in fact, even after you created it, after three or four months, if as a sender you say, okay, whatever, I'm going to let this stream run until its end date, you can make it non-cancelable even during the stream.

Nicholas: Oh, that's very cool. That's very cool. So you can have some agreement like, all right, I'll send you the assets when you make the stream non-cancellable.

Paul Razvan Berg: Yes. And one funny use case here is inheritance. How do you give out your inheritance to your kids so that they don't... kill you, right? You set up a stream for, like, 20 years and make it non-cancelable, so you can't change your mind, but they don't get access to your entire wealth from day one.

Nicholas: Wow. It's a wide set of possible applications. Okay, that's cool. I'm actually very curious, and I think it's probably hard for us to talk about here, but, just from reading the Sablier v2 docs, taking a cursory glance, talking to you, and the code quality people can expect from you, I feel like maybe the applications here aren't even necessarily end-user applications so much. Maybe this undo feature is an example where you don't even necessarily know that it's Sablier, but it's really doing something very efficiently under the hood. I wonder if there aren't applications between DeFi protocols, where protocols might want to use this without necessarily advertising that it's a stream, just because it provides some guarantee of security that they don't have to build on their own. It would be difficult for us to imagine those examples here. But it does feel like, just because of the code quality and the gas efficiency and so on, there might be some value here as infrastructure that isn't even branded.

Paul Razvan Berg: And actually, I'm a huge believer in this division of labor between DeFi teams. My take, my belief, is that with time we will see a portion of startups emerge as the backbone protocols, which have stood the test of time in terms of security and so on, and a long tail of apps which either couldn't build a protocol that was safe, or just saw an opportunity to build on top. Or even more mundane use cases. For example, I have this vision for Sablier where it can be the on-chain backbone for all kinds of compliance tools for vesting. A random example: imagine you're building a vesting platform in, say, Japan, that complies with Japanese regulations and so on, but you don't really have the time and bandwidth to code up your own contracts, or it's expensive security-wise and so on. So what you do is, we set up a good collaboration, and they just use our general-purpose protocol and put a UI on top. Those users don't even need to know that they're using Sablier. All they need to know is that they're using that Japanese platform, which is transparent and on-chain and whatever. And what I want to mention here very quickly is that we also have an incentive plan for this. We have broker fees. So when you make an integration with Sablier v2, you can charge a percentage of the deposit made by the user when the stream is created.
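
A hedged sketch of the broker-fee idea: the integrator is paid a percentage carved out of the gross deposit before the stream is created. The names and the 18-decimal percentage encoding below are illustrative, not Sablier's actual parameters.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Illustrative broker-fee math: the fee is expressed as an 18-decimal
// fraction (e.g. 0.01e18 = 1%) and deducted from the gross deposit.
library BrokerFeeMath {
    function split(
        uint256 grossDeposit,
        uint256 brokerFee // e.g. 0.01e18 for 1%
    ) internal pure returns (uint256 depositAmount, uint256 brokerAmount) {
        brokerAmount = (grossDeposit * brokerFee) / 1e18;
        depositAmount = grossDeposit - brokerAmount;
    }
}
```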

Nicholas: Yeah, very cool. I saw there are broker fees and protocol fees. Protocol fees are capped at 10%, but at this point they're zero, and for the foreseeable future they're zero. And then the broker fees, so someone integrating Sablier could charge a little fee for making a front end. It reminds me... it seems like a parallel would be the way Safe has been used by so many startups to build more sophisticated front ends and integrations, but essentially it's Safe under the hood. It feels like Sablier could be like that, or even more: Sablier is just a piece of an architecture, just some useful infrastructure. One thing that comes to mind is the... we didn't talk about it too much, but you have these on-chain SVGs, reminiscent of the Uniswap V3 ones, that are the artwork for the NFTs generated when you create a stream. Is that something where people could potentially change the artwork if they do an integration on top of Sablier?

Paul Razvan Berg: For this request, the short answer is currently no. But we are thinking about solutions for this, workarounds. The current implementation is a global NFT descriptor for everybody. And I mean, the details are still specific to your own stream, what you're going to see: your own token contract address, your particular stream amounts, status, and so on. But no, there's not much customization from that angle of whether you can actually put in your own SVG.

Nicholas: One area where there is more customization is the hooks. Can you explain how the hooks work?

Paul Razvan Berg: Yeah, hooks are basically callbacks for stream actions. So you create a stream, and then one of the parties can interact with the stream. For example, suppose the stream is canceled. The recipient, if it is a contract, can receive an on-chain notification when that cancellation happens. And imagine you are a lending protocol and you have taken ownership of the NFT. You can say, okay, when this stream is canceled, I'm going to liquidate the borrower immediately, or do something with it; you respond to some on-chain action. So it's basically a way for Sablier to signal to the other user when a user makes a move on the stream.
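
A hedged sketch of the hook idea from the recipient side: a contract that holds the stream NFT implements a callback that the streaming protocol invokes on cancellation. The interface and function names here are illustrative placeholders, not Sablier's actual hook signatures, so check the v2 docs before implementing one.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Illustrative hook interface; the real Sablier hook signatures differ.
interface IStreamRecipientHook {
    function onStreamCanceled(
        uint256 streamId,
        address sender,
        uint256 senderAmount,
        uint256 recipientAmount
    ) external;
}

// A lending protocol holding stream NFTs as collateral could react to
// cancellations on-chain, e.g. by flagging the position for liquidation.
contract LendingRecipient is IStreamRecipientHook {
    mapping(uint256 => bool) public flaggedForLiquidation;

    function onStreamCanceled(
        uint256 streamId,
        address, /* sender */
        uint256, /* senderAmount */
        uint256 /* recipientAmount */
    ) external override {
        // In production, restrict this to calls from the streaming contract.
        flaggedForLiquidation[streamId] = true;
    }
}
```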

Nicholas: So let's say I stake an NFT and I get some stream of assets in exchange for the collateral that I put up. If that stream gets canceled or completed, then it could call some function that would either give me back the NFT or liquidate me and give the NFT to somebody else. You could build a lending protocol, for example, on top of a hook like that. Or a part of a lending protocol, at least.

Paul Razvan Berg: Yeah, precisely.

Nicholas: Very cool. And that's also where the re-entrancy stuff comes in. We should talk about, oh, actually, before we move on, there's also flash loans in it, right?

Paul Razvan Berg: Well, we did implement that, but we didn't actually deploy it. Okay. We decided to just leave it there in the repo. It was audited, but we didn't actually include it in the actual deployed contracts.

Nicholas: Got it. Okay. But maybe someday. So, okay, we should talk about PRB proxy. So what is DS proxy? And then what is PRB proxy? Why is it necessary to have PRB proxy?

Paul Razvan Berg: The benefit is that you give power to EOAs to do delegate calls. EOAs, by default in Ethereum, they cannot do this. And the...

Nicholas: When you say an EOA can do a delegate call, what does that mean exactly? Because an EOA can call a contract. Yes. But you're saying...

Paul Razvan Berg: Yeah.

Nicholas: Yeah. So what is it that an EOA can't do right now?

Paul Razvan Berg: So imagine you have a complex protocol that requires multiple actions to interact with it. Suppose it requires you to make a deposit. Then it requires you to tweak some parameters, say how much you want to borrow; there's a separate function for that. And then finally, you make the borrow from another function. So you have three functions, right? With the PRB proxy approach, you can write a target function which performs those three operations in one function.
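
A hedged sketch of the target-contract pattern: the EOA owns a proxy, and the proxy delegatecalls into a target whose single function performs the deposit, the parameter tweak, and the borrow in one transaction. The lending-protocol interface here is hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

// Hypothetical lending protocol that normally requires three separate calls.
interface ILendingProtocol {
    function deposit(address asset, uint256 amount) external;
    function setBorrowParams(uint256 maxLtv) external;
    function borrow(address asset, uint256 amount) external;
}

// Target contract: the proxy delegatecalls into this, so the three calls
// below all execute with the proxy as msg.sender, in a single transaction.
contract LendingTarget {
    function depositAndBorrow(
        ILendingProtocol protocol,
        address collateral,
        uint256 collateralAmount,
        uint256 maxLtv,
        address debtAsset,
        uint256 borrowAmount
    ) external {
        protocol.deposit(collateral, collateralAmount);
        protocol.setBorrowParams(maxLtv);
        protocol.borrow(debtAsset, borrowAmount);
    }
}
```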

Nicholas: Okay, so DS proxy from MakerDAO originally? Yes. Allowed MakerDAO to do multiple interactions with different contracts, which was a requirement for staking assets in order to create DAI tokens or something like that?

Paul Razvan Berg: Yeah. So they had a complicated system; I mean, MakerDAO is complicated, right? So they built DS proxy to enable this composition of contracts that I talked about before. But DS proxy was built around 2019, with Solidity 0.5. PRB proxy is a modern implementation. It uses CREATE2, so you can pre-compute your address. And it's arguably more secure because it doesn't have any storage; the storage for all the target plugins is kept in a separate contract that cannot be touched by anyone.

Nicholas: Because that's a huge vulnerability with proxies in general, right? If people forget to, what is it, initialize the implementation, the confusion between the storage of the base implementation and the proxy can allow you to mess around with the storage in order to get the proxy to do things that are unexpected. Is that more or less a description of it?

Paul Razvan Berg: That's for upgradeable proxies. With the forwarding proxies we're talking about here, PRB proxy and DS proxy, the issue is more mundane. The problem is, if you delegate call to a malicious contract, it can mess with your contract storage. It can do things like potentially self-destruct, or, if you are depending on some particular storage values for some other app, it can update those.

Nicholas: So every time you make a delegate call, if your forwarding proxy has storage, you're at risk of your storage variables being messed with and that causing unexpected behavior. Yes, precisely.

Paul Razvan Berg: And DS proxy has storage. PRB proxy doesn't have any.

Nicholas: Very cool. And it's CREATE2. I didn't realize DS proxy is not CREATE2. So when I was reading about PRB proxy, it kind of reminded me a little bit of the way Safe works. Does that make any sense to you?

Paul Razvan Berg: Yes. So Safe has a very simple proxy that is, in a way, similar to what I did here with PRB proxy. But PRB proxy is much more advanced than Safe's. For example, I also have this plugin system that lets you react to third-party contracts, which I don't think Safe has. I would argue PRB proxy is better tested and has better documentation, but we can quibble about that later. In terms of features... Yeah, sorry, go ahead. No, no, no. I just want to say that I do want to give props to Safe. I think they're an amazing team, I love their product. But we looked at what they built, and it didn't have the features we needed, like the plugins we have.

Nicholas: Plugins means that if I... So, in a single... I guess I don't really understand what it means to forward multiple delegate calls. It seems to me to somehow overlap with account abstraction, maybe, in terms of functionality. It does.

Paul Razvan Berg: It does. It does. So in like 20 years from now, PRB proxy and DS proxy will be obsolete if account abstraction gets implemented.

Nicholas: 20 years. Or whatever.

Paul Razvan Berg: I might have been too pessimistic. Okay.

Nicholas: But basically it lets an EOA call multiple contracts in a single call. So how does this differ from Multicall?

Paul Razvan Berg: Multicall is read-only, whereas PRB proxy can do writes.

Nicholas: Okay, I see. Okay, so this is very, very cool. So PRB proxy is very cool. And maybe the piece that we didn't say out loud is that Sablier is deployed with, is built using, PRB proxy.

Paul Razvan Berg: Yeah. If you use the Sablier UI, we make you deploy a proxy at the start.

Nicholas: Awesome. Can you just, as a final point on this, explain what it means to react to third-party contracts?

Paul Razvan Berg: Let's think about the Sablier use case. When you create a stream on the Sablier UI, we use a proxy, and the proxy becomes the sender in the Sablier protocol. And because streams can be canceled by recipients, and when a stream is canceled the protocol refunds the sender the tokens that were not streamed, in that case the proxy receives the tokens as the refund. So you get the problem that the tokens would end up in the proxy. So what the plugin does...

Nicholas: Because it's message sender?

Paul Razvan Berg: Yes. Yes.

Nicholas: Classic tx.origin versus msg.sender.

Paul Razvan Berg: Exactly. With the plugin system that we have, at the same time as you deploy the proxy, you can install a plugin. And we install a plugin which implements the Sablier hook for stream cancellation. It detects the cancellation, looks at how many tokens were received, and forwards them all back to the original EOA. It's a beautiful on-chain orchestration that we built there.
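
A hedged sketch of the refund-forwarding idea Paul describes: when the proxy receives tokens back on cancellation, a plugin's hook sweeps them to the proxy owner. The hook signature and the owner lookup are illustrative assumptions; PRB proxy's actual plugin interface and owner mechanics differ.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.8.19;

interface IERC20 {
    function balanceOf(address account) external view returns (uint256);
    function transfer(address to, uint256 amount) external returns (bool);
}

// Minimal owner lookup, standing in for whatever the proxy actually exposes.
interface IProxy {
    function owner() external view returns (address);
}

// Illustrative plugin: installed on the proxy, it runs via delegatecall when
// the streaming protocol invokes the cancellation hook, so address(this) is
// the proxy itself. It sweeps any refunded tokens back to the owner EOA.
contract RefundForwarderPlugin {
    function onStreamCanceled(
        uint256, /* streamId */
        address, /* sender */
        uint256, /* senderAmount */
        uint256, /* recipientAmount */
        IERC20 token
    ) external {
        uint256 refund = token.balanceOf(address(this));
        if (refund > 0) {
            token.transfer(IProxy(address(this)).owner(), refund);
        }
    }
}
```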

Nicholas: Wow. That's very, very cool. That's going to take a minute to fully sink in. But basically, it means that you can call multiple contracts, you can make multiple delegate calls, in a single transaction. And then if those contracts return something to the calling contract, it can be forwarded to your EOA, as long as you can anticipate what those callbacks should expect before you deploy it. Because it's a CREATE2 address, and it's an immutable, no-storage contract, you need to know in advance what to anticipate.

Paul Razvan Berg: Exactly. Yes. But you can install a plugin even after you deploy the proxy if you need it.

Nicholas: Wow. How?

Paul Razvan Berg: There's a function for this in the proxy registry.

Nicholas: Okay, amazing. Okay, so we've done the tour of Paul Berg's internet life, except for one subject. The last subject I wanted to ask you about is health drugs. Do you have any quick health drugs people should be trying? Nootropics or whatnot? Yes.

Paul Razvan Berg: As a quick philosophical explanation, I do believe that eventually humans will reach immortality. It will not be through nature alone; naturally, we're going to die at 80, 90. We need artificial intervention, so people have to get used to that. My top pick for the longevity drug thing is rapamycin and acarbose. Have a look at the ITP, the Interventions Testing Program; I think it's the most rigorous program in the world for longevity drug testing. Rapamycin inhibits mTOR and acarbose lowers your blood sugar after eating, and that combination is probably the most powerful drug combination known today. It can increase lifespan and healthspan by 20-30%. It's insane if you ask me. If you only take two compounds, you get to live to 100. That's the hope; I could be wrong, of course. But just look at the data. It's absolutely unbelievably good. And those two drugs are super cheap. You just need a doctor who is up to speed with this science. It's a very low-effort way to extend life and live longer, very effective and affordable.

Nicholas: Amazing. Okay, I'm going to look into both of those. Obviously, this is not longevity advice.

Paul Razvan Berg: Of course. Yeah, absolutely. I'm not a doctor.

Nicholas: But you're a solidity doctor. And I learned a lot here. This was a great conversation. Thank you so much for sharing all this wisdom.

Paul Razvan Berg: Likewise. Thank you for having me. This was absolutely a super fun space.

Nicholas: Awesome. Okay. Thank you, everybody, for coming through to listen. This episode will be up next week on the podcast feed at web3galaxybrain.com. Next week, the people from PartyDAO are going to be on the show at 5 p.m. US Eastern Time next Friday. And I actually just locked in, I think, the following Friday: it's going to be Privy.io, who are the creators of the wallet technology behind Friend.tech, the viral app by 0xRacer that's been going around since yesterday. They have a very cool thing where you can log in with SMS, Apple ID, or Google, and they create a Shamir wallet where you hold one of the shards in local storage in a PWA on your Android or iOS device. Eventually they're going to upgrade that so it's held in iCloud end-to-end encrypted storage, or the Google equivalent. So it's very interesting to be able to have these PWAs with, at least, non-custodial wallet shards, with no prior wallet required. And then you can execute transactions within these apps without flipping between your wallet and the app, straight up in the website you're interacting with. Very cool. Excited for that. Paul, thank you so much for coming through, and I hope to talk again soon.

Paul Razvan Berg: Yeah, absolutely. See you, man.

Nicholas: All right. See you, everybody. Bye-bye. Web3 Galaxy Brain airs live most Friday afternoons at 5 p.m. Eastern Time, 2200 UTC, on Twitter Spaces. I look forward to seeing you there.
