Web3 Galaxy Brain đŸŒŒđŸ§ 


Chris Chang and George Datskos, Founders of GhostLogs

30 January 2024




Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guests today are Chris Chang and George Datskos, founders of GhostLogs. GhostLogs is a new platform that allows developers to fork contracts on various EVMs, inject their own custom events and view functions, and then interact with them via RPC, Dune Analytics, or Flipside. In this episode, Chris and George explain how their journey doing MEV on Binance Smart Chain and building NFT loan aggregator Snow Genesis led them to build GhostLogs. We also discuss EIP-7571, another potentially compatible approach to moving event logs out of transaction execution. It was great getting to talk to Chris and George about the emerging trend separating events from transactions. I hope you enjoy the show. As always, this show is provided as entertainment and does not constitute legal, financial, or tax advice or any form of endorsement or suggestion. Crypto has risks, and you alone are responsible for doing your research and making your own decisions. Hey, Chris.

Chris Chang: Hey, Nicholas. How are you?

Nicholas: Good. How are you?

Chris Chang: I'm doing well. Thank you.

George Datskos: Hello, Nicholas. Hey, Chris.

Nicholas: Hey, George. Hey, Chris. Welcome to the show.

George Datskos: Thank you. Thanks for having us. How are you doing, Nicholas?

Nicholas: I'm doing great. I'm excited to talk all about Ghostlogs and everything you've been working on. Awesome.

George Datskos: We're super excited and stoked to share what we've been building as well.

Nicholas: So, I guess just to start off, maybe people aren't familiar with you, but Chris Chang, George Datskos, maybe you could explain a little bit about Ghostlogs. What it is about blockchain that got you excited in the first place? Maybe, George, you want to go first, then we'll go to Chris?

George Datskos: Sure. I can kind of give you my background as well and start from there. Yeah, great. After graduating from college, I moved from Berkeley to Tokyo to work at a Japanese multinational called Fujitsu. If you're not familiar, it's kind of like the IBM of Japan. I spent a few years there working on distributed storage and data processing, and then I worked at a couple of different startups building consumer products, still in Japan at that point. I ended up becoming a tech lead and helped build out the engineering team. Ultimately, those two startups got acquired, and I moved back to the US after about a decade in Japan. I moved back to San Francisco, worked with a friend who was starting a new startup. And then there, I met Chris in San Francisco. He kind of blockchain-pilled me, showed me what's possible. I kind of had not been aware of that world before. I ended up doing MEV with Chris. We scaled our on-chain trading infrastructure to about 150 blockchain nodes. Our edge was mostly based on latency. So, the more nodes we had... the better latency we had. So, we did MEV for a while and then decided, about the same time, together with Chris, we're like, "Well, what if we can contribute more by building actual products that create real value for users?". And both of us have backgrounds in creating and building products. So, it's kind of a good fit. So, that's how we kind of transitioned from doing Web2 to doing MEV, to then doing Web3 products. And the first thing we built was SnowGenesis. But I'll let Chris kind of give us a background before we talk about that.

Chris Chang: Great. Yeah, awesome. Well, great introduction. Thanks, George. So, my background isn't actually a traditional one, as I wasn't always a software engineer. Initially, I worked as an international student advisor at UC Berkeley before transitioning into a software engineering role. As a software engineer, I worked at Amazon Web Services on Kinesis, which involved data streaming. Two startups later, I got into crypto and met George in San Francisco, where we collaborated on long tail MEV. And... We have been building products ever since.

Nicholas: That's a great background. I think that's super inspiring to people because so often we just get really, you know, driven into us that you have to go to some elite computer science education in order to get involved in the space or just in general to be really good at building technology products. So, it's interesting to hear your story is not conventional.

Chris Chang: Yeah, absolutely. You know, when I was learning about computer science as a psychology student, I was like, okay, I could not do this. But I imagined a world where the skills I'd acquired would allow me to build products on something like blockchain, seeing what the future would be like had I had those skills. And yeah, excited to have taken this journey, finally landed on crypto, and be able to build products like Snow Genesis to, you know, create value for users. Wow.

Nicholas: And George, you mentioned, just before we jump into Snow Genesis and then GhostLogs, that the MEV project you worked on had 150 nodes. Why is it advantageous to have so many nodes?

George Datskos: Yeah, sure. So, we were actually doing MEV on the Binance Smart Chain, which was all about the mempool. So, it's kind of before or during Flashbots, but Flashbots was only on Ethereum mainnet. So, there was no way to pay a priority auction fee to create bundles. Everything had to go direct to the mempool. It was like the most PvP environment you can think of. So, in order to be fast at MEV on those kinds of chains, you have to be very quick to, first of all, learn about a transaction and then propagate your own transaction. And the more nodes you have, the more likely you are to hear about a transaction first. And the more nodes you have, the more likely you are to be able to propagate and connect to validator sentries.

Nicholas: And so, those nodes are also geographically distributed, I assume?

George Datskos: Exactly.

Nicholas: That's interesting. And in the BSC space, I don't know if people are just familiar with Ethereum before Flashbots. Is that sort of how it is or are there different properties that made it a different experience?

George Datskos: Ethereum before Flashbots. Well, it's probably similar to what Binance Smart Chain was like. Basically, everything was about a priority gas auction. So, take people who were doing, for example, sandwiching. We were not, but people who were. They'd see a transaction they wanted to sandwich, and they'd have to send something before it and then something after it. They'd have to send a slightly higher gas fee to go first, and then the same gas fee as the transaction to go after it. And those would all go through the mempool. And as you know, priority gas auctions cause a lot of spam. There are unnecessary transaction fees. I think those are some of the reasons that Flashbots came about, as well as offering the ability to do confidential transactions. But yeah, quite a different world from pre-Flashbots to post-Flashbots. We were only doing MEV on Binance Smart Chain, which didn't have that concept at all. It was all mempool.

Nicholas: Wow, I guess that must have been, I mean, it had a lot of traffic and a lot of volume. It must have been pretty competitive even relative to other chains.

George Datskos: It was, yeah, very much. There were some very, very strong teams.

Nicholas: Very cool. So, after that, you moved on to Snow Genesis, which I think was one of a few projects you did before arriving at GhostLogs, right?

Chris Chang: Yeah, that's right. So, we built Snow Genesis, which is the first NFT lending aggregator on Ethereum. At the time, coming out of that MEV PvP world, we really wanted to build a product that would provide value for users. And based on our research and personal experience in NFT lending, we realized that the data is so fragmented that as a lender or a borrower, you had to go through each protocol just to find listings and offers. The thing is, nobody got time for that. So, we decided to take matters into our own hands and build out the first NFT lending aggregator. We grew from just the two of us using the product to now thousands of monthly active users, and we just celebrated our one-year anniversary last November.

Nicholas: Wow, that's great. So, it's an aggregator. Does it also offer the functionality of a protocol for lending NFTs or it's just aggregating existing products?

Chris Chang: Yes. So, for Snow Genesis, we are just a data provider, aggregating the data in one centralized place for users. We don't provide an execution layer.

Nicholas: Got it. Interesting. Okay, that won't be the focus of our conversation, but if people are interested, they can check it out at snowgenesis.com. I guess the best application is if you're trying to take out a loan against an NFT, it might be useful for you?

Chris Chang: Yeah, yeah, definitely. So, if you're a borrower and you want to borrow against your Pudgy Penguin, for instance, then you'll come to our platform, look at the offers from various protocols, and figure out which one you want to take. Then you can go to their website directly. And we are an unbiased platform, which means we don't charge anything from anyone. We just provide the data as it is, and no one can promote their information on our site.

Nicholas: I see. Is there a paid function at all? Or does it generate any revenue?

Chris Chang: No, this is a public good, essentially.

Nicholas: Wow, we got to get you into the optimism round next time or something.

Chris Chang: Thank you. Yeah, we didn't apply, but yeah, next time.

Nicholas: Sounds great. Okay, and then having built Snow Genesis, did your work lead directly to Ghost Logs? Or were there other things you experimented with in between?

Chris Chang: Yeah, it actually directly influenced GhostLogs. Because while we were building Snow Genesis, we noticed a lot of integrations depended on missing or incomplete smart contract event logs, right? This meant that we had to leverage low-level traces and other techniques to piece the puzzle together. For example, the Blend contract, Blend by Blur, which is an NFT lending protocol. It's pretty gas efficient. However, there are definitely event parameters that are missing. So what we had to do was listen for certain events, like a repay event, and then retroactively figure out when the loan was taken out, who the borrower and lender were, the amount, etc. And if there was any renegotiation in between, we also had to piece those puzzles together, finally arriving at the full blockchain data state.

Nicholas: And we realized that- So you would do that by simulating a transaction?

Chris Chang: Yeah, so we actually built our own data pipeline in the indexer to make that happen.

Nicholas: And so you would be reverse engineering the data that was missing, that wasn't emitted from the events that you would need in order to consume or interact with the protocol. And you'd be doing that by running like a local fork? Or how exactly?

George Datskos: So for this particular case, there was an event being emitted, but it just didn't contain all the parameters that we needed. The alternative is to find the original loan start, and then you have the repay, and you can kind of reconcile that way. Along the way, you would find all the refinances where the borrower changes. The first start of the loan will have the borrower, the lender, and the rate, but those might change along the way. So you have to reconcile it by looking at historical events. You might have to do an eth_call to get certain parameters at the block where the transaction occurred. And that's when we thought: wouldn't there be a better way, if we could have exactly the events that we want, if we could modify this contract? Because going back to our MEV days, there are state overrides that are already possible in all the execution clients. They have a way to say, "Okay, what if I could change the balance or the bytecode for this account and then re-execute it?". At that time, we were not re-simulating transactions that way. But we thought, wouldn't it be great if we could?
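A rough sketch of the state-override-and-replay idea George describes, written with Foundry cheatcodes (`vm.createSelectFork` and `vm.etch`). The contract, address, and replay details below are hypothetical stand-ins for illustration, not GhostLogs APIs:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import {Test} from "forge-std/Test.sol";

// Instrumented re-implementation of a lending contract that emits the
// parameters the original left out. Logic elided; only the richer event shown.
contract InstrumentedLending {
    event RepayDetailed(uint256 loanId, address borrower, address lender, uint256 amount);
    // ... original repay logic would go here, plus an emit of RepayDetailed ...
}

contract ReplayTest is Test {
    address constant LENDING = address(0x1); // placeholder for the deployed protocol

    function testReplayWithOverride() public {
        vm.createSelectFork("mainnet"); // fork mainnet at a historical block
        // Swap the deployed bytecode for the instrumented version, then
        // re-execute the historical transaction against the override.
        vm.etch(LENDING, type(InstrumentedLending).runtimeCode);
        // ... replay the original calldata and observe the new event ...
    }
}
```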

Nicholas: So I love it. You guys are not afraid of getting into the guts of the prior transaction history. And I suppose that comes from this willingness to do things like MEV, where you're really looking very closely at the chain's history before acting. And the requirements of that sort of showed you what was missing in the events emitted by the protocols you wanted to interact with.

George Datskos: Yeah, exactly. We were all about data processing on EVM because of our MEV history, as you said. And also, we both have backgrounds in ETL and data processing and building out distributed storage and processing pipelines. So it's kind of our bread and butter. But we also thought that this is kind of a lot of work to get this done. And we realized a lot of people probably have the same problems. And we're like, well, what if there's a better way to do this? And out of that came Ghost Logs.

Nicholas: So if we can just state it simply, what's wrong with events as they are written in protocols or as events are described in the EVM? What's missing from events that needs to be fixed for people doing the kinds of things you were trying to do?

George Datskos: I would say there are a couple of different things. The first thing is that there are no real rules for events. Basically, anybody can write an event in a contract that emits whatever they like. It doesn't have to reflect reality. So to know what actually happened, you have to look at what is actually changing on chain. What state is changing? Typically, there's social pressure to make sure events are correct, but they don't have to be. So certain protocols may have unintentionally written wrong events that don't reflect what actually happened. So the ability to go in and reconcile that as a third party is super important.

Nicholas: Before we jump to the next one, a classic example of that is that originally OpenSea's transaction history was dependent on events. And people would deploy NFT contracts that emitted fake, falsified events saying that, for example, a famous artist minted this NFT to somebody else, and now it's on sale for a really low price. Why wouldn't you buy it from such a reputable artist when, in fact, that event was fabricated?

George Datskos: Exactly. That's a big problem, especially for block explorers. They show those, or they used to show them, as if they were true, exactly like you said, because they trusted events. You see the transfer from and to, and you trust that it was actually sent, when it was actually just fabricated, like you said. So it's a big problem, with spam and also scams, because sometimes people trust them. But now people have learned that events are not actually reality. They cannot be trusted.

Nicholas: So there we have examples where, if you're going to be displaying blockchain data to users who maybe aren't quite as deep in the stack, like if you have a website like OpenSea, or Snow Genesis for example, you need to be careful about this kind of thing. You might not want to just directly trust the events that are emitted; even if they contain all the information you need, they may not be accurate. So that's one thing. And then there's also the MEV use case where, again, if the events are not accurate, you don't want to be trusting them when making decisions about the trading plays you're going to be making.

George Datskos: Exactly. Very good point. There was an article a couple of years ago about some MEV bots that were kind of basing their simulation results on the transfer events, I think. And then those were actually not correct. They were misleading the bots and they lost a lot of money by not looking at the actual balances.

Nicholas: Scary. All right. I cut you off. You were about to say another reason.

George Datskos: The other reason is, let's say you have the most perfect event schemas. You have exactly the events someone needs to reconstruct the current and past state of your protocol. Even then, there's also the problem of cost, right? Because when I write events into a contract and people execute on that contract with transactions, they have to pay gas fees to emit those events. Typically we see roughly 5% of a transaction's fee going towards event logs. So for a $20 swap right now on Uniswap, there's about a dollar being spent on gas fees just for the event logs. And if you look over the last three years, about $600 million has been spent on these logs. And the people paying for them are usually not the ones who benefit from them. Most people don't even know that they're generating events and paying for them.

Nicholas: I guess we should rewind just in case there's anyone listening who has never written a smart contract or never interacted with one directly. An event is like a log that you can emit in the course of a transaction's execution on any EVM. And as we were alluding to previously, the event's contents are decided by the author of the contract, and they can be completely arbitrary. You could say that every time someone transfers my NFT, we're going to emit the number one. Or you could emit the token ID of the NFT and who the recipient is and who the sender is. Or you could emit any arbitrary thing you want. So that's the premise for what we're talking about. And you're saying that this second or third problem is that the gas burden is put onto the interactor with the contracts, when really, maybe the events can't be trusted and they're not the ones benefiting from them. So maybe there's another place we could displace the burden of paying for the event emission.
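A minimal Solidity sketch of that arbitrariness: the same `emit` machinery serves an honest transfer and a fabricated one, and the EVM never checks either against state.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative only: events are whatever the contract author writes.
contract ArbitraryEvents {
    event Transfer(address indexed from, address indexed to, uint256 tokenId);

    mapping(uint256 => address) public ownerOf;

    function transfer(address to, uint256 tokenId) external {
        require(ownerOf[tokenId] == msg.sender, "not owner");
        ownerOf[tokenId] = to;
        emit Transfer(msg.sender, to, tokenId); // honest: mirrors a real state change
    }

    function fabricate(address artist, address to, uint256 tokenId) external {
        // Perfectly valid Solidity: a Transfer that never happened. This is
        // the pattern behind the fake "famous artist minted this" listings.
        emit Transfer(artist, to, tokenId);
    }
}
```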

George Datskos: Exactly.

Nicholas: Or even remove it entirely, I suppose.

George Datskos: Right. I think we don't necessarily recommend that everybody start removing all their critical events. But we do think that now people can be empowered to decide what events they want to have on chain and what events they want to have running in an off-chain context, which is not going to incur fees to the end users.

Nicholas: Because I guess the nuance there is that the events can be useful. Especially if you are the one authoring the contract and consuming them, then you can trust them. Or if you take a look at the contract source code and believe it's trustworthy, then even if you're not the original author, you can still trust those events. But the events themselves are not available within the context of a transaction to the transaction itself, right? As far as I know, or at least for the most part. The events being emitted are not stored in contract state. They're not used by subsequent transactions. They're not relevant to your balance. As far as the contract is aware, they're just something that is emitted one time during the transaction and then consumed off-chain. So what would the reason be for keeping events at all? Why should we not just be doing all events off-chain?

George Datskos: That's a really good question. I think there are a couple of reasons. One, you did mention that events are not accessible on chain. While they're not directly accessible to the transaction, I think there are some interesting protocols that use the fact that an event happened and create a ZK proof that that happened. And I think they're called like ZK coprocessors. And you can prove, hey, this was actually emitted at this time because there is something called like a receipt hash. And then you can prove that it occurred. And then you can verify that on-chain and then take an action based on that.

Nicholas: So a little bit early stage, at least. I don't think most people are using events that way today. Definitely.

George Datskos: It's definitely early stage. But it is a little bit of a reason why you'd want to not necessarily remove everything that happened on-chain. The other reason why you might want to keep some events are, you know, some of the infrastructure, like what we're building and the products and tooling, are still very early. So especially if you're building like a token, right, and you take out the transfer event, that might make it very difficult for people to calculate their taxes. You know, especially if you're in the US and you have to record all your capital gains and losses, which is for every transfer, essentially. You're going to need to be able to account for that. So there are very good reasons why you still want to not necessarily remove the critical events, because a lot of the ecosystem kind of relies on them at this point. Right. And we're trying to kind of build in the direction where now you have the option to change. You can choose which events you can keep and which events you can put off-chain while still being verifiable and accessible to people. But it's still early days and we're trying very hard to make that accessible.

Nicholas: I guess ERC20, 721, 1155, do they specify that you must emit an event as well? Or is it just a common practice that people are doing that? I'm not sure.

George Datskos: I believe that's required by the standard.
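George is right: the token standards do mandate events. For reference, the event signatures required by ERC-20, ERC-721, and ERC-1155 include:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Events required by the respective EIPs (single-transfer events shown;
// ERC-1155 also requires TransferBatch, ApprovalForAll, and URI).
interface IERC20Events {
    event Transfer(address indexed from, address indexed to, uint256 value);
    event Approval(address indexed owner, address indexed spender, uint256 value);
}

interface IERC721Events {
    event Transfer(address indexed from, address indexed to, uint256 indexed tokenId);
}

interface IERC1155Events {
    event TransferSingle(
        address indexed operator,
        address indexed from,
        address indexed to,
        uint256 id,
        uint256 value
    );
}
```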

Nicholas: In the MEV days, you might be modifying contracts in order to emit data that's more convenient for you in your own custom environment. You might be doing some events that are being generated on your local fork in order to take decisions about what transactions you would be executing subsequently. Is that the primary application you see? Or what other kinds of applications do you have in mind for how people might use this?

George Datskos: Yeah, very good point. So I think the main consumers of these events are people like analysts, researchers, or anyone who's creating an index for their dApp. And typically, if the events are not included, they have to, as Chris was talking about, come up with all these workarounds to figure out what happened on chain. And usually the events are the easiest way to do that, right? So now, if you can add them after the fact, anybody can come in and say, okay, in this function, this happened, let me add an event. Then they can query that event directly. We integrate with Dune and Flipside, so you can query it from there and reconstruct the history, right? So having access to those new events is super empowering to consumers of those contracts, whether you're analyzing, researching, or building an index. For example, Ambient Finance doesn't have any events, but we added swap events to our local fork, and now we have access to all of those and can reconstruct the state.

Nicholas: Got it. So maybe, could you take me through what, from a dev perspective, how do I actually do ghost logs? How does it work?

Chris Chang: Yeah, I can briefly talk about this. So as a developer or analyst, you come to our platform and you create this thing called a ghost fork, which is essentially a fork of the mainnet blockchain. Inside the fork, you can include any modified contract that you want. We call it a ghost contract. Inside the ghost contract, you're able to create new view functions, access private variables, or emit events as you intend them. After that, you can run simulations based on real transactions: what happens if this on-chain transaction is replayed against the modified contract? Then you can see the new events being generated. Once our users are happy with the simulation, they can go to their fork and backfill all the historical data with the modified contracts, generating all the custom ghost logs. From there, they can either download a CSV file or export to Flipside or Dune Analytics, start writing queries, join with the native tables, and publish dashboards to share with the world. That's the analytical use case. On the operational side, you can imagine people who want to build a data pipeline using our RPC or WebSocket to subscribe to their custom ghost logs. As new transactions and blocks come in, they can take action based on the new events. A really cool thing is that they no longer have to do transformations in TypeScript. They can write everything in Solidity or Vyper, which is something we're super excited about.

Nicholas: That's cool. So what would they be writing in Solidity or Viper?

Chris Chang: Yeah. Take Blend, for instance. The Blend contract is missing some parameters, right? It's only emitting the loan ID and the collection address of the NFT. Before, you had to create your own data pipeline based on certain events, retroactively figure out the missing information, and build that indexer. With GhostLogs, you no longer have to do that. You just go into the contract itself, and you can write Solidity to emit that event. And now you don't have to do the crazy transformation in a language of your choice downstream.
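A hypothetical sketch of the pattern Chris describes: inside a ghost fork you edit the contract source itself, so the repay path can emit every parameter the indexer needs. The names and struct layout below are illustrative, not Blend's real code.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract GhostBlend {
    struct Lien {
        address lender;
        address borrower;
        uint256 amount;
    }

    mapping(uint256 => Lien) public liens;

    // Original-style event: loan ID and collection only.
    event Repay(uint256 lienId, address collection);

    // Ghost event: everything downstream needs, with no mainnet gas cost
    // because it only exists on the fork.
    event GhostRepay(
        uint256 lienId,
        address collection,
        address borrower,
        address lender,
        uint256 amount
    );

    function repay(uint256 lienId, address collection) external {
        Lien memory lien = liens[lienId];
        // ... original repayment logic unchanged ...
        emit Repay(lienId, collection);
        emit GhostRepay(lienId, collection, lien.borrower, lien.lender, lien.amount);
    }
}
```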

Nicholas: And then in the operational context, you're subscribing to those logs and using them to trigger subsequent logic, and that logic decides whether it should take some action on chain, for example. How would you consume the logs in an operational context? In an ongoing streaming fashion? Yeah.

George Datskos: So we create a ghost fork. It's basically an RPC that tracks the on-chain state changes, takes all the transactions, and reruns them with your new bytecode, right? So your fork basically maps to an RPC node. We spin up the typical eth_ methods and a WebSocket and make those available. So you can subscribe the same way you would with any other node, like an Alchemy node or your own node. You can set up a bidirectional WebSocket and say, okay, give me a notification every time this event is emitted. And then you can take an action, whatever that action is.

Nicholas: Very cool. So basically, to summarize, you go to the Ghost Logs website. You choose a contract that's deployed to, is it Ethereum or on various EVMs today?

George Datskos: Currently, we're on Ethereum mainnet and Base.

Nicholas: Great. And I suppose over time, if there's demand, more. Exactly.

George Datskos: Arbitrum and Optimism are coming soon.

Nicholas: Cool. Very cool. And then it's actually interesting that you haven't done BSC. Has the game changed since back in the day?

George Datskos: It's changed quite a bit. And also, one thing I would say is that the chain you might use as a user versus the chain you might use as a trader/MEV bot are different for me personally. And I do feel that a lot of the interesting protocols and apps are these days on L2s and the L2 ecosystem. I think a lot of things are moving or developing there in a very exciting way. For sure.

Nicholas: And being compatible with both Arbitrum and the OP Stack will open that up. Open up many, many doors. Although I suspect the volume on BSC is probably still higher than most of those L2s.

George Datskos: It does still have incredible swap volume. Yeah.

Nicholas: So it's interesting, because it does seem like, at least personally, motivation-wise or directionally for the future, you're more interested in where developer attention is rather than trader attention. Exactly. For the product, at least at this stage. Yeah.

George Datskos: I think there's a lot of interesting new consumer apps coming on board now that kind of infrastructure is either there or developing in a very positive way. And just kind of really excited to kind of help that. And decouple like event emission from gas costs. Because right now they're very coupled. And when you're thinking about what you can emit to show what's going on in your contract, you're kind of restricted by gas costs. And the ability to decouple that, we're very excited about helping make that possible.

Nicholas: I guess if you go in the direction of servicing trading volume, the historical chains are maybe more about inserting events so that you're able to take MEV-style trading actions. But looking forward to all the new kinds of things that are coming out, it may even have a fundamental influence on how the contracts are written in the first place.

George Datskos: That's a very good point. Yeah.

Nicholas: I just want to run through summarizing how it works. So you go to the website, you choose the contract you want to fork, you create new events or modify how the contract works. Are you limited to changing the event emissions or can you change other things about the contract too?

George Datskos: Oh, yeah. So the cool thing is you can also add custom view functions. When you create a fork, and you can create multiple forks, I can have, for example, an NFT finance fork that has NFTfi, Arcade, and Blend. They're collected together because they're similar in what they're doing. And then I can add view functions, right? If a view function doesn't exist, or I have to do five different calls to do one thing, now I can do it in one call, right? And all that logic can happen in the VM itself, which is super powerful for dApps. If you're trying to call a bunch of different methods on a bunch of different contracts, now you can write it all in one place and get exactly the data you want in the format that you want.
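A sketch of that "five calls become one" idea: a view function added inside a fork that aggregates reads across several lending protocols. The interface, function names, and addresses are hypothetical placeholders, not the real protocols' ABIs.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical common interface over several NFT lending protocols.
interface ILendingProtocol {
    function bestOffer(address collection) external view returns (uint256);
}

contract NftFinanceViews {
    ILendingProtocol public immutable nftfi;
    ILendingProtocol public immutable arcade;
    ILendingProtocol public immutable blend;

    constructor(address _nftfi, address _arcade, address _blend) {
        nftfi = ILendingProtocol(_nftfi);
        arcade = ILendingProtocol(_arcade);
        blend = ILendingProtocol(_blend);
    }

    // One eth_call returns what previously took three separate round trips.
    function bestOffers(address collection)
        external
        view
        returns (uint256 nftfiOffer, uint256 arcadeOffer, uint256 blendOffer)
    {
        nftfiOffer = nftfi.bestOffer(collection);
        arcadeOffer = arcade.bestOffer(collection);
        blendOffer = blend.bestOffer(collection);
    }
}
```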

Nicholas: It's very cool. It sort of echoes the transition of some part of the industry to using Forge and thinking in this Solidity-centric mindset where you're forking locally (of course, Hardhat and others can do this too) and maybe even modifying the contracts to customize them for your use case in ways that might not have made sense for the original developers, who deployed for gas efficiency or whatever it might be. Yeah, exactly.

George Datskos: Yeah, I think we want to do for indexing and data what Forge and Foundry did for testing. The idea that you can run your data pipeline in the same language the contract was written in is a super powerful and empowering thing, right? Because you don't have to rewrite and have two versions of something. You don't have to have the Solidity logic here and your math and logic rewritten elsewhere. It can all be in one place, and you can make those transforms directly where the events are emitted. That just makes things a lot easier downstream.

Nicholas: As a more Solidity-minded developer myself, I completely vibe with this. Although I have heard people in the Web3 front end world describe things like Hardhat tests being useful because they can be recycled as front end logic, or at least demonstrate front end logic for the eventual front end integration. So I guess writing in Solidity or Vyper makes sense if you're intending to do actions on chain. The MEV case is a pretty clear one, because even if you're not interacting in Solidity, you're at least spinning up transactions against contracts as your primary point of interaction. But if you're creating a front end, then maybe even being in JS or TS is not an obstacle, because that's where you're going to end up eventually. What do you think makes it make sense to be working in Solidity? I suppose it's because you're mutating the contracts themselves.

George Datskos: So while you're not mutating the state, you are mutating your view of that state. And the idea that you can emit new events that were not there before is super powerful for indexers and backends. But also, the fact that you can add new view functions means you can move some logic that previously lived in your front end. Maybe now it can live in the contract fork itself. So even though you're not paying for that on-chain, you can spin it up and it'll be there in the ghost fork that we spin up for you. So maybe your front end can be a lot simpler now.
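To make that concrete, here's a minimal hypothetical sketch of the kind of edit a ghost fork enables. The contract, its state variable, and the function names are all invented for illustration and are not a real Ghostlogs API; a real fork would edit the verified source of a deployed contract.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical lending pool fork, purely illustrative. The point is that the
// added event and view function exist only in the off-chain fork, so they
// cost no gas on mainnet.
contract LendingPoolGhostFork {
    mapping(address => uint256) public debtOf; // pre-existing on-chain state

    // Added in the fork: the original contract never logged remaining debt.
    event DebtRepaid(address indexed borrower, uint256 amount, uint256 remainingDebt);

    function repay(uint256 amount) external {
        debtOf[msg.sender] -= amount; // original logic, unchanged
        // New "gasless" emission, replayed against historical transactions
        // during a backfill and visible over the ghost RPC.
        emit DebtRepaid(msg.sender, amount, debtOf[msg.sender]);
    }

    // Added view function: front-end logic that can now live in the fork.
    function remainingDebt(address borrower) external view returns (uint256) {
        return debtOf[borrower];
    }
}
```

Note the additive pattern: new events and view functions sit alongside the untouched original logic.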

Nicholas: Right, right. And as a protocol developer, or whatever the word is for the kind of thing you were doing at Snow Genesis, where you're observing other protocols and aggregating them, you can make these kinds of intermediary view functions that simplify the creation of the front end but are not executed on-chain. Whatever that in-between role might be, if you understand how the protocol works, then you can really simplify the front-end development, or whatever kind of interaction endpoint you're creating for users or other developers within your organization. So that makes sense.

George Datskos: Exactly. I think our view of what we're building goes beyond Ghostlogs. We think of the whole ghost ecosystem as trying to build a data availability substrate for blockchains, making it really, really simple to add observability without incurring costs on-chain. The first part of the journey is Ghostlogs, which, like you were kind of hinting at, is an overlay network on top of existing blockchains. You can plug in those gasless logs and those free view functions and run them in this off-chain environment.

Nicholas: So in a way, it's a little bit in the same ballpark as The Graph, potentially.

George Datskos: I think there's a lot of interesting compatibility there. Imagine if you could take a ghost fork that has all your new events. Previously you might have had to do an eth_call during your transform step; now you can add them as events and plug that into something like The Graph. Your graph transforms can be so much simpler and faster, because now you're operating on events that did happen on-chain and events that didn't happen on-chain but do reflect state transitions. You'll be able to spin up that index much more efficiently. Great developer experience. And we also think it's going to be a lot faster to re-index.

Nicholas: Makes a lot of sense. So: go to the website, choose the contract, fork it, create this ghost version of the contract, insert whatever view functions or changes to the emitted events that you want, simulate new transactions. Then, if you're happy with it, you can backfill all the old events and either download them or integrate with Flipside or Dune. If I wanted to do something like have a subgraph based on this, and I guess it's probably not there yet, where is this being run? Is it running on my local machine, or is it in the cloud somewhere?

George Datskos: So first of all, we spin up an RPC for you. And then you can run a subgraph yourself, or you can use the decentralized Graph protocol, or you can use one of the centralized hosts, like Goldsky or Alchemy.

Nicholas: So you really can just do this today, because you're just generating a regular RPC interface.

George Datskos: You can do it today for some of the hosts. I know that Goldsky lets you put in a custom RPC. I don't know if Alchemy does, but I think eventually they will when they see the power of these kind of off-chain execution environments.

Nicholas: And that RPC, the ghost RPC, is something that you run in the cloud?

George Datskos: Yes, currently it's centralized, so we can ensure certain latency and throughput requirements. We're able to do all sorts of optimizations, especially with how we backfill very efficiently at scale. But we do see that eventually there will be an open-source reference implementation that anybody can run to verify that what we're outputting is actually correct.

Nicholas: I think for most people's applications, having trust that it's correct is more important than being able to run it themselves, most likely, unless they reach a certain kind of scale. And the efficiency of that is pretty good for testing, or at least for getting the product into people's hands quickly. It makes a lot of sense. We almost touched on this topic, and we should return to it: the existence of things like Ghostlogs, and also this other company, Shadow, that's working in a similar vein of rethinking how events work, means these techniques can potentially leak out, maybe even excitingly, into how protocols are designed in the first place. Maybe not completely obviating events, but changing how developers think about what they need to put into an event if they know they can save their users some gas and make their protocol cheaper to use. Maybe even providing way more convenience functionality through view functions, if they do so off-chain entirely. How do you think it might affect how protocols are designed in the future?

George Datskos: Yeah, I'd say, first of all, I think it's great to have ERC-7571. It's a starting point for the community and ecosystem to align on what it means to have this off-chain set of events and these off-chain view functions, and then specify them in a compatible way, right? Where all the tooling can benefit, and there can be a consistent way for people to specify those and have them consumed. So it makes a lot of sense to have one way of doing that. I'm excited that Shadow has done some of the work there to help start that process.

Nicholas: Can you describe 7571 a little bit for folks who aren't familiar?

George Datskos: Sure. So the core idea is that you want something that's backwards compatible while signaling, as the protocol creator, what you'd like to have emitted in this off-chain world. The way they have it right now, you can go in and create an off-chain block. They call it a shadow block. We're not too stoked on enshrining a product name in the syntax, and we do hope they'll consider calling it an off-chain block or a gasless block instead. In there, you would put all sorts of logic: view functions, events, and also the calculations and calls necessary to construct the parameters that you want to emit. Because if we think about event emission, sometimes you have to calculate something to emit something, right? Maybe you're calculating the debt of a loan that was repaid, and that value was not previously accessible in a specific function; maybe now you calculate it. So you have a block that does all sorts of logic. You can even do an external call to, for example, a Chainlink oracle, get the current price of an asset, and then emit all that. Existing nodes will not be impacted, because these blocks will be, in the first version, compiled out as comments. So the bytecode that is compiled and deployed on-chain will be consistent with what would be produced if these blocks were not even there. And then, because the comments are there in the source, when that source is shared, either through IPFS or through Etherscan, the nodes and indexers that care about this can fetch it, and they can run those off-chain blocks.
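As a rough illustration of the idea George describes, the sketch below embeds an off-chain block as a comment, so the deployed bytecode is unchanged while 7571-aware indexers can compile and execute the extra logic. The `shadow { ... }` keyword and the contract itself are assumptions based on this description; the actual draft syntax may differ.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical contract illustrating the ERC-7571 concept described above.
contract Loans {
    mapping(uint256 => uint256) public debt;

    /* shadow {
        // Declared and emitted only off-chain; compiled out of the
        // on-chain bytecode because it lives inside a comment.
        event Repaid(uint256 indexed loanId, uint256 amount, uint256 remainingDebt);
    } */

    function repay(uint256 loanId, uint256 amount) external {
        debt[loanId] -= amount; // normal on-chain logic

        /* shadow {
            // The off-chain block can compute values (here, remaining debt)
            // that the on-chain contract never pays gas to emit.
            emit Repaid(loanId, amount, debt[loanId]);
        } */
    }
}
```

Because the off-chain blocks are comments, verifying nodes see identical bytecode, while indexers that fetch the verified source can reconstruct the richer event stream.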

Nicholas: That's interesting. So 7571 more addresses the latter example we were discussing, about really having a large influence on how protocols are designed for gas efficiency and supplementary convenience event emission. You put all of the events that someone might want access to, to interact with or understand your protocol, in the comments, but you don't ever emit any of them. But it's still from the perspective of the original author of the contract. It's not so much about adding things subsequent to the deployment.

George Datskos: Exactly. I think there are some signaling benefits there, right? The contract author and the people who created it are typically in one of the best positions to surface what they think is most important. Maybe not critical enough to be on-chain, but important enough to represent certain state changes. Signaling that, and then people being able to bootstrap on top of it, is pretty powerful. But as you were starting to mention, the ability for other people, like a third party, to also do that is very interesting.

Nicholas: Yeah. There's really no Schelling point, no metatextual Schelling point, for talking about contracts yet, it seems. There's no canonical place to see what other people have said or thought or commented or added to a given contract. It seems like that could be a useful addition to just having the hashed, verifiable source code for the contract itself from the original author.

George Datskos: I think it's a great starting point to be able to share what the original author had. But things like sharing and discovery are going to be very, very important for something like this to really take off. Because then you can have the same thing for protocols themselves that platforms like Dune and Flipside have for data analysis. You can look at what other people have written, and maybe there are trusted people in the community who say, "Okay, here's my version of, you know, Aave V3; it has all these extra events that I think are important." And then you can bootstrap on top of those, build your products, take the forks that people have built, import them into your own fork, and maybe add changes on top. Having that ability to share and discover is something we're very excited about releasing soon.

Nicholas: - Yeah, what's the state of the art with that right now? I guess subgraphs are one way people do that, and Dune is another. I suppose the queries themselves are visible even if the product itself is centralized, so you can build on top of other people's ideas there in some ways. What do you think of as the ways to do this today?

George Datskos: - So yeah, I think those are great ways people are already doing it: sharing source code, sharing queries on Dune, sharing subgraphs. On a platform like ours, currently everything is siloed to a specific customer. So I come in and my forks are private to me, but very soon we're gonna open up the option to make those public. I can make them public in the context of a single fork or multiple forks. A fork will have all these different contracts with all my edits, and the community will be able to see those, maybe rate them, maybe say, okay, I really like this. You can start creating a kind of social layer on top of it, because not everybody wants to look at every single detail. But if they can say, okay, hey, I trust these people, these people have written these great edits, then I can import them into my fork, or I can use their fork directly. Then you start to create a kind of ecosystem around it.

Nicholas: - It's interesting to think it might actually be more timely to do the complementary thing to 7571 and instead go backwards from the consumption of the protocols. I mean, it might take longer to convince people to change how they write their protocols in the first place than to convince people like MEV traders or interface builders who want convenient ways to write their own little Solidity view function, but in a way that's not tied to a specific thing. Maybe there you could imagine some additional file or library that you import in a local fork, not even specific to Ghostlogs, but something people could just be trading in order to do exactly what you're talking about. Because today, most of these are pretty specific to the platform or the architecture, be it The Graph or Dune or Flipside. It would be cool to see people sharing new ways of consuming existing protocols, especially really complicated ones like Blur or Blend, as you were mentioning.

George Datskos: - Oh yeah, absolutely. Because as you said, like this doesn't require the initial developers to do it. Anyone in the community can come up with their own view of Blur or Blend and then share that. And then you can, like you said, go backwards and then maybe meet in the middle as 7571 comes from the protocol route. And then we come from this kind of a creator route and then eventually everything is possible to do either on-chain or off-chain.

Nicholas: - All right, I said Blur, but I meant Seaport. But yeah, there are so many protocols where it's required. Like, I remember working on the Juicebox protocol: in order to pass an argument that has several complicated structs, you need to traverse so many contracts to retrieve that data. If someone could just write a single function to do that for you, it would simplify the life of everybody thereafter.

George Datskos: - Very good point. I was also looking at the very original Seaport, like the Wyvern contract event emissions. They were very, very tricky to figure out what actually happened. Being able to edit the code was kind of the most ergonomic way of doing that, and then being able to share it.

Nicholas: You can imagine people, I'm thinking like Reservoir or others who provide aggregating indexing services, or even Snow Genesis, looking at these things and preferring to do a public goods version, where sure, it's useful for them, but why not let other people consume it? It's not really a moat, or it ought not at least be a moat, to be able to understand how protocols work, even though maybe in some cases it is.

George Datskos: - I absolutely agree that the public goods version of a lot of these protocols would be super impactful.

Nicholas: - Yeah, you could also imagine, if there were a standardized way to do this, that the protocols themselves might even subsidize grants or other things to make their protocols more legible. Because it does seem like, in the current state of affairs, there is this awkwardness in smart contract authoring where you're walking a line: legibility makes it easier to audit, easier to understand, and more accessible for extensions to be built on top of, but at the expense of gas and maybe even design efficiency. If you really understand how it works, you don't really need it to be legible for your own purposes.

Chris Chang: - Yeah, absolutely. As you can imagine, smart contracts are kind of unforgiving: before deployment, you have to figure out what event logs you want to emit, and once it's deployed, it's done, unless you're using a proxy contract, of course. But you can't really know every future analytical need ahead of time. So Ghostlogs provides you an alternative way to retroactively add new events as you see fit. And I think this is super powerful in comparison to Web2, where if you miss some front-end or back-end tracking, you just add it; it's not a big deal. With blockchains, that's not the case.

Nicholas: - Yeah, you're totally right. It reminds me of ERC-4906, which is the 721 metadata update extension, I think that's the right one, where you can emit an event if the metadata changes in your NFT contract, in order to trigger a refresh of the cache for OpenSea's metadata caching, for example. But the problem is that the event needs to be in the 721 contract itself. Your 721 NFT needs to implement it, so all prior NFTs cannot do this, because they're already deployed. So something like a standard around how to augment legibility for read-only applications, like ghost logs, especially if standardized in a way where people can really feel they're contributing to a public good by annotating contracts this way, could make something like 4906 achievable for even the whole history of NFT contracts.
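For instance, a retroactive ERC-4906 fork might look something like this. The two event signatures are the real ones from ERC-4906; the surrounding contract and its `setTokenURI` function are hypothetical stand-ins for whatever metadata-mutating logic a given legacy NFT actually has.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

// Hypothetical ghost fork of an already-deployed NFT that predates ERC-4906.
contract LegacyNFTGhostFork {
    mapping(uint256 => string) internal _tokenURIs; // pre-existing state

    // Real ERC-4906 event signatures, added retroactively in the fork.
    event MetadataUpdate(uint256 _tokenId);
    event BatchMetadataUpdate(uint256 _fromTokenId, uint256 _toTokenId);

    function setTokenURI(uint256 tokenId, string calldata uri) external {
        _tokenURIs[tokenId] = uri; // original logic, unchanged
        // Added emission: indexers watching the ghost RPC get a
        // cache-invalidation signal that never cost gas on mainnet.
        emit MetadataUpdate(tokenId);
    }
}
```

An indexer could then subscribe to `MetadataUpdate` on the fork and refresh only the tokens that actually changed.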

George Datskos: - Yeah, I like that a lot. The idea of a retroactive 4906, annotated with the metadata update event.

Nicholas: - It would be great. And someone like OpenSea would love it, because they don't need to go and refresh their cache over everything even if nothing has changed. If a developer, or even just a friendly person in the community, augments a contract retroactively with a 4906 post-facto ghost fork, then it maybe even reduces the computational load on them. If they can get some assurance that it's an accurate implementation, it maybe even makes their backend more efficient. Would be interesting. So, lots of interesting applications of this stuff. I guess we sort of touched on it, but one question I had was how devs should think about non-standard, non-EIP modifications to how Solidity and the EVM work. We talked about 7571, which is an in-progress official way of going about things. But in practice, devs are always doing local forks of things. They're messing with libraries they're importing from OpenZeppelin or whomever, because a library is missing something they need or is in some other way not ergonomic. Maybe those are things they're not gonna ship in production, but they're very useful for testing or understanding, or for building some part of their stack. I'm curious if you have any thoughts on how Ghostlogs sits in this field: the praxis of mucking about in the EVM and trying to get things done, versus the standardized approach, where you should emit an event with all the data that you need, but you're locked into it once the contract is deployed.

George Datskos: - It's a good question. I think if you look beyond standardization, it's very free what you can do, especially with things like ghost forks; you can add events any way you see fit. Without standardization, it's kind of a free-for-all. We don't necessarily have specific recommendations beyond this: we think it's not a good idea to remove events in your ghost fork, because then it can be really challenging, since you no longer have the mainnet events plus your changes. So we typically recommend being additive: add new events instead of modifying existing ones. That's the main thing we recommend to anyone who's messing with events.

Nicholas: - To me, it seems like maybe there is an ideological battle about doing things like ghost events or off-chain events, because people are afraid of exactly that non-standardization, of making it difficult to understand what you're looking at, because each contract will no longer abide by the conventions we've been familiar with. But it also seems like it's maybe evidence that the EVM, Solidity, and the chains that use them are maturing, and that it's time for more efficient ways of interacting with the blockchain. Does that resonate with you?

George Datskos: Yeah, it's a very interesting thing. If you look at the history of, for example, JavaScript, which may not be everyone's favorite language but is now kind of the de facto language for the web, the original language was built in, I think, ten days. And now it's everywhere, right? All the browsers have it. So even if you don't like it, you still use it; it's kind of what people have to use. And maybe people don't love Solidity and Vyper, but Solidity is the de facto language for the EVM. Although I am seeing some interesting developments, like Arbitrum and Stylus, being able to write your contracts in any language you like that has Wasm support and compile it down to a very efficient runtime. I do see this as maybe the next iteration of on-chain development, much cheaper logic and execution. I don't know what will happen with L1s; they probably won't have this option for a very long time, but some of these L2s will give you new ways of thinking about things and new ways of writing contracts. They're just very exciting times. We want to support people to build things, to retroactively add new events whether or not they initially thought about it, and to empower them to do things efficiently with the tools that currently exist while also keeping an eye on these new tools.

Nicholas: - Yeah, absolutely. It's an interesting pressure, because you would think something like compiling to Wasm to deploy on-chain would be what the most elite developers in the ecosystem would be most interested in, the ones most experienced in other languages and in concepts that are not available in Solidity or Vyper. But the non-availability of that, most likely for a long time, on Ethereum, I should say, means that the most experienced developers writing the most mission-critical code are maybe writing their protocols for Ethereum primarily. So maybe it leaves these alternate language options for deploying code to blockchains in an awkward position right now, because the developers who might be most likely to use them won't have access to them if they're writing for Ethereum, and only these, at least right now, fanciful, less well-trod chains have these functionalities available. At the same time, we started off with rollups as a ZK design, simplified to the optimistic design, and then we've been in this period for the last year or so of the explosion in popularity of optimistic rollups, which are easy to use because they are fully EVM-equivalent. But over the long stretch of time, it seems obvious that the coolest things you could do with L2s would be to not be anything like the EVM. And simultaneously we have EigenLayer proposing a world in which maybe the EVM and the security affordances of the ETH stake are separated, such that maybe you have a completely different virtual machine than the EVM, but backed by the same kind of security model, potentially. So a lot of interesting things going on with the alternate possibilities and forking futures.

George Datskos: - Yeah, I think the network effects of things like Solidity and the EVM are just so powerful. But like you said, if you're building something that needs to be multi-chain, especially in this new L2, multi-chain world, then you either have to have two versions of it, which introduces all sorts of other issues, or you need to just write it in the de facto language. But perhaps new apps or new protocols that are only targeting these new VMs will be the fresh upstarts that choose whatever language or framework works best for them.

Nicholas: - Yeah, it does feel possible that Solidity is the JavaScript of this world. It's believable, at least. But that brings me to another question. We've talked only about the EVM, but there's been a lot of buzz about Solana lately, and there are all kinds of other chains and other languages, different L2 configurations, Celestia, the Eigen stuff. Do you ever think about those things, or do you really feel like the surest network-effect bet is the EVM?

George Datskos: - I do think about them on a personal level; I like to explore these other chains. But I think it's good for us to focus on just one thing for now, something we can do really, really well, and make sure we deliver and create actual value before we think about expanding.

Nicholas: - We ran the gamut around Ghostlogs, but are there other applications, or other interesting things you're working on, or ways people might consume it or think about using it, that we haven't covered and that you think would be interesting for the audience to hear about?

George Datskos: - So basically, as we said, Ghostlogs is kind of the underlying infra that you can change: you can change events and you can change bytecode to run in this off-chain environment. The things we're looking towards now are improving that, but also building some tooling on top of it. That's things like Ghost Query, a way to query your off-chain events alongside your on-chain events, integrating with platforms like Dune and Flipside to make that possible. The second thing we're exploring right now is Ghost Graph. We're already excited about the initial experimentation here, super early days, but it's our take on what the subgraph would look like if it were invented today, in a post-Ghostlogs world. What if you could transform your on-chain plus off-chain events into new entities that you could then query through GraphQL, but unrestricted from having to do things like eth_calls during your transform step? So imagine a very ergonomic way of doing that.

Nicholas: - That is interesting. And I know people, as much as they love the graph, also kind of struggle with getting it indexing, writing subgraphs, et cetera. It's a whole extra step involved in making a production protocol or a smart contract. Do you think this would like reduce the burden to write a subgraph and do all those steps?

George Datskos: - I think it'll make things a lot simpler. The developer experience can be a lot tighter when you can use events, which are already the baseline for transforms. If you can put more logic into that step, then the step that comes after it in the subgraph logic can be so much simpler and tighter, and you can get up to speed a lot faster. As all these consumer apps come on board, they need a backend powered by an index that they can develop very quickly and that's performant, and we can help empower those use cases. Instant backends for dApps is something we are very, very excited about.

Nicholas: - Yeah, that sounds great. I'm at the end of my list of questions. Chris, I don't know if there was anything else you thought we should cover, or any interesting elements we didn't mention?

Chris Chang: - No, I think we have covered a ton. Thank you so much.

Nicholas: - Yeah, you're welcome. George, I asked you the same question. I don't know if there's anything we didn't cover.

George Datskos: - I think it was great. Like, yeah, super happy with this interview. Thanks a lot for taking the time to talk with us. I think like you have a lot of really great points and it's great to kind of discuss it and show what we're doing.

Nicholas: - Yeah, I'm sure it's gonna be popular with listeners who are curious about what's going on with all this new thinking around off-chain events. If people wanna learn more about Ghostlogs, where should they go?

George Datskos: - So you can go to ghostlogs.xyz. Also we have docs.ghostlogs.xyz for the technical documentation.

Nicholas: - Great, and where are you co-located? Do you two live in the same area?

George Datskos: - We are both in Miami.

Nicholas: - In Miami, okay, Miami. That's interesting. And is Ghost Logs just the two of you for now or are you scaling up?

Chris Chang: - Currently it's just the two of us. Yeah, we've been working together for the past few years now on various products.

Nicholas: - Awesome, I don't know, at least to my knowledge, I don't think anyone on the show has had a startup in Miami yet. So that's interesting. What's the scene like there these days?

Chris Chang: - You know, we moved about two years ago from San Francisco, and at the time, well, FTX happened, so that was kind of a downturn. But we ended up meeting a ton of crypto entrepreneurs down here and we have a pretty tight group, which is nice. And we all play padel together.

Nicholas: - What's padel?

Chris Chang: - Padel is this sport that's like something between tennis and pickleball.

Nicholas: - Oh, wow, okay, in between. Next time I'm in Miami, you're gonna have to show me how to play.

Chris Chang: - Oh, absolutely.

Nicholas: - All right, Chris, George, this was wonderful. Folks, if you're interested, ghostlogs.xyz. Thank you so much for coming through and thank you for everyone for coming to listen. Talk to you next week, thanks.

Chris Chang: - Thank you, Nicholas.

George Datskos: - Thank you, Nicholas.

Nicholas: - Bye-bye.
