Web3 Galaxy Brain đŸŒŒđŸ§ 


Cassandra Heart, Founder of Quilibrium

19 June 2024


Transcript

Cassandra Heart: What is acceptable discourse today is not necessarily acceptable discourse tomorrow. And so in order to resolve that, you have to remove the element that causes that public pressure from even being successful. What Quilibrium does in order to provide censorship resistance is it gives greater anonymity, but also makes it possible so that anybody who's participating in the network, if there is content that they wish to host and ensure that it stays up, they can continue to do so. Nothing we do is that new. We're just doing things in new ways. Ethereum has some strong lessons learned. Solana has some strong lessons learned. Ultimately, by consequence, we've been able to take those lessons learned and evolve past what those protocols are kind of stuck in, in terms of technical debt. We just simply don't do things in the way that a lot of investors want. They want tokens. They want liquidity. They want us to betray all of the core values of what Q is trying to do. And we just refuse to compromise.

Nicholas: Welcome to Web3 Galaxy Brain. My name is Nicholas. Each week, I sit down with some of the brightest people building Web3 to talk about what they're working on right now. My guest today is Cassandra Heart, founder of Quilibrium and part-time contributor at Merkle Manufactory, creators of Farcaster. On today's episode, Cassie and I dive into Quilibrium, a decentralized platform-as-a-service protocol that aims to enable developers to store data and run uncensorable apps. We discuss Cassie's extensive background building cryptography products and how this experience and the myriad cryptographic developments since the advent of Bitcoin and Ethereum enable a new type of world computer. We look at various technical aspects of Quilibrium's architecture and zoom out to see how these pieces create a substantially new virtual medium for software. It was great getting to know more about Quilibrium from its benevolent dictator, Cassie. I hope you enjoy the show. As always, this show is provided as entertainment and does not constitute legal, financial or tax advice or any form of endorsement or suggestion. Crypto has risks and you alone are responsible for doing your research and making your own decisions. Cassie, welcome to Web3 Galaxy Brain.

Cassandra Heart: Thanks for having me.

Nicholas: Absolutely. I'm excited to talk about Quilibrium today. It's a very exciting subject. Maybe to begin with, what is Quilibrium's mission?

Cassandra Heart: Quilibrium's mission is to secure every bit that traverses over the web. So whether that's on the Quilibrium network itself or a VPN that somebody's using, or if somebody's interacting with another chain, or even if somebody's interacting with the clear web, our goal is to secure every bit that goes over the web. And that's a pretty big, hairy, audacious goal, but we believe we'll get there.

Nicholas: Yeah, that's amazing. I read in some of your documentation that Quilibrium should become the base fabric of the internet. Maybe you could explain a little bit what that means relative to the traditional kind of cloud infrastructure that the web relies on today?

Cassandra Heart: Yeah. So it's kind of a remarkable shift that's happened over the past couple decades. We saw the emergence of AWS, and you had a lot of the old guard, like back in the day when people were manually operating their own data centers, staffing teams to run all that. The whole reason they did this was because they had a lot of resources that they needed to put in a data center, and paying somebody else to do it, maybe you could get a third-party agreement with another data center, but there's still an extension of trust. And so a lot of those people, when the advent of AWS and the like came along, were saying, "This is someone else's computer. This is someone else's machine. Cloud is just someone else's machine. Why would I trust that?" And so Amazon spent a lot of time trying to build up that trust. And by consequence, they've become quite the behemoth. The grand majority of the internet's traffic is served through cloud providers. And so it creates an interesting problem, because in several countries, like the United States, there is this phenomenon that the federal government, through its various arms, in the event that they want access to something, can subpoena it, or they can use one of those court actions which is completely secret. You don't know about it. And they can coerce third parties to hand over data, hand over information, hand over raw machines, and you may never be any the wiser. And so because of these revelations by Edward Snowden and a whole bunch of other folks along the way, we've started to realize we're extending a lot of trust, and blind trust at that, in these providers. And so the problem with this, of course, is that while most people are never going to be subject to some sort of scrutiny for national security, or at least they don't think they will be, the reality is different. They actually are. A lot of people end up having their data pulled in through these various dragnets that happen. And this happens far more often in some other countries, like China, for example. Obviously, everybody is under very heavy scrutiny in China because of the way the CCP operates. But even in the U.S., a lot of people who are law-abiding citizens who have done nothing wrong have ended up with their data being suctioned into these mass surveillance dragnets, and I think that's a big problem. And it creates a very interesting problem. The problem is essentially that while most people believe that they have nothing to hide, because most people are genuinely good and they really don't have anything to hide, it doesn't matter, because metadata can frequently be read in ways that project a bad light on someone. And so by consequence, you could be labeled as a bad actor just by virtue of pinging a cell phone tower at the time there was a protest. Or participating in a completely lawful protest, even. There are lots of reasons why people who feel like they have nothing to hide actually should be very concerned about the way that their data is being collected and managed. And so trying to fight against this overall state apparatus is something that unfortunately kind of harkens back to the phrase, I'm sure you've probably heard this before, that to bake an apple pie from scratch, first you must invent the universe. Very similarly, for our mission, to reinvent the internet in a way that is secure, you must actually reinvent the entire internet.

Nicholas: So I'm hearing privacy, and also censorship resistance is an element that's important about Quilibrium, right?

Cassandra Heart: That's correct, yes.

Nicholas: Not only, yeah, the privacy of actors to be able to express their free speech or congregate as they will, use the internet as they prefer, but also to be resistant to takedowns, which I know in some of your talks you talk about Discord, Cloudflare, and other kinds of cloud, you know, sort of platforms-as-a-service and applications limiting the free speech of people on the internet, even when what they're talking about is legal.

Cassandra Heart: Yeah. And that is unfortunately a really strange phenomenon. Like, a lot of these social media companies, you'd think they'd have an interest in preserving the freedom of speech of their users, but in reality, it actually ends up being kind of a struggle. A lot of public companies, and we saw this with Twitter until Elon Musk took over the company, end up facing pressure from shareholders and the general public alike to suppress things that the general public may consider harmful, because that pressure can obviously influence the overall value of the stock for that company. The problem with just submitting to public pressure, rather than actual legal pressure that is justifiable about what is acceptable discourse, is you end up with this condition that you are subject to the tyranny of the masses. And what is acceptable discourse today, if you read many rationalists' writings, you'll find this is a phenomenon that has existed since time immemorial, is not necessarily acceptable discourse tomorrow. And so in order to resolve that, you have to remove the element that causes that public pressure from even being successful. And so what Quilibrium does in order to provide censorship resistance is it gives greater anonymity, but also makes it possible so that anybody who's participating in the network, if there is content that they wish to host and ensure that it stays up, they can continue to do so. And identifying them is essentially a very hard thing. So there's a lot of different layered solutions that come into play. But the ultimate goal, yes, is to provide censorship resistance and very extreme privacy.

Nicholas: Got it. So we'll get into some of the architecture stuff, which is very deep, and we'll only manage to scratch the surface, I'm sure. But you point to this interesting thing, which is that social media companies at the application layer nevertheless act as a kind of infrastructure for communications, and are beholden to the advertising companies, or the companies that choose to advertise on their platforms, who can withhold advertising. And so even though social media are sort of a utility, or can be thought of as a kind of utility at times depending on their scale, they're nevertheless subject to market forces where they're forced to act more like companies like, I don't know, the Gap or Apple or something, rather than like infrastructure providers. So they're very concerned about their image and what's trafficked on their networks, even though they seek to also be treated as a kind of utility. Nevertheless, they're censoring things as if they were, like, an American brand in the mall.

Cassandra Heart: Yeah. And it gets actually a little bit more complicated than that. It's not just advertising pressure. A great example of this, and it's a rather controversial example, so I'm going to drop this little spicy factoid in here, is that in the 2020 political cycle, there was a burgeoning social media empire being built, very much in the wild, and it obviously had a very specific slant associated with it. It was called Parler. A lot of people probably know it as right-wing Twitter or right-wing Facebook or some kind of combination of the two. And Parler ended up getting unplugged by Amazon Web Services. Amazon Web Services made a claim that there was a credible threat that was coming from Parler. But later data and studies that were conducted actually revealed that the incident on January 6th was actually coordinated largely on Facebook. And so the justification about why they had deplatformed this particular service was ultimately a political ploy. And so being subject to censorship due to political dynamics is already a very unacceptable position to be in. I may have my own personal beliefs. I may have certain viewpoints that'll conflict with people on any side of the political spectrum. But I will very much admit that in the United States, freedom of speech is a very powerful force for equalizing points of view in terms of whether or not they can actually be said. And because companies are going beyond that measure, it's actually a credible threat at every level of the stack. It's not just advertisers who will pull the plug, like what happened with Twitter when Elon Musk said the infamous go fuck yourself. It goes all the way out to even infrastructure providers, who will pull the plug if you become politically inconvenient and potentially scar their reputation.

Nicholas: Right. And they're subject to pressures also, like, I don't know, I think Cloudflare was pressured to remove the protection services it provided to some sites based on the content of those sites, because of this kind of politically oriented public pressure on them, even though there were many other layers of the stack at which those sites could have been removed, right?

Cassandra Heart: That's correct, yes.

Nicholas: So we're talking about two things here already. Firstly, I mean, decentralization is kind of the underlying subject here, but we're talking about both privacy of communications and censorship-resistant storage and serving of data. Are there any other elements that are, like, core to the Quilibrium mission before we dive into some more of the details?

Cassandra Heart: Yeah. So you can kind of think of the network as a box of Legos. There are certain components where the way they can be mixed and matched matters in an important way, depending on what we're doing with data for certain types of applications. We have this approach for providing group-level privacy in communications. Not everything needs that. Sometimes it's just direct peer-to-peer communications and you don't need that particular component. Sometimes you need not long-term data storage, but ephemeral data storage. Like, for example, this particular conversation is happening over streaming video. You may wish to choose to record it, like you are, but that doesn't necessarily mean that every single video call would be recorded. And so the ephemerality of streaming data is one other Lego piece that might not matter to every application. So we try to have as big of a Lego box as possible, with enough strong opinions baked into those Lego pieces so that when they are composed, they're composed in a way that doesn't violate any sort of principles of the network, that people remain private, that people remain censorship resistant, in whatever ways those are being composed.

Nicholas: Got it. Now, philosophically, how does Quilibrium differ from the blockchains that we have today?

Cassandra Heart: Philosophically, it depends on the network, obviously, because there's loads of different blockchains out there. But Quilibrium's privacy-forward approach is something that actually kind of harkens back to the original ethos around Bitcoin and the like, which was this idea of cypherpunks coming together to say, we want uncensorable money. We want money that we're free to transact with anyone, with a strong layer of privacy around it. There was this belief for a while in the early Bitcoin scene that Bitcoin was private. But the reality is that it is a public ledger. Those events are publicly traceable. And given enough metadata, you can, you know, correlate those actions to identities. And so it's not as private as people thought it was. And that was kind of a mistake on Satoshi's part, because Satoshi had very specifically said, in order to retain this privacy level, you do need to use the addresses in a disposable way, like you just constantly cycle new addresses. And people just didn't do that.

Nicholas: The wallets, I guess, just didn't support that, or didn't enforce that initially.

Cassandra Heart: Right, right. And the problem comes around to a very basic principle, which is the path of least resistance. If you make it easy to do things the right way to preserve privacy, then people will do it. It's just that Bitcoin did not make that easy. And so by consequence, enough people started doing just single-address reuse, and ultimately it leads to this condition: there's this aspect called k-anonymity, which is like hiding in large sets. And because enough people were actually just willingly identifying themselves by reusing addresses, the people who weren't are now easily identified.

Nicholas: Got it. I've also heard you mention that there have been some inventions since the creation of Bitcoin that have kind of changed the score for what's possible, and that maybe open up new opportunities that don't follow the paradigm of a blockchain as we know it. What are some of those inventions and how do they change the shape of what you've come up with for Quilibrium?

Cassandra Heart: Yeah. So, I mean, I've been kind of a passive observer giving hot takes on the industry to the general public, at least for quite a while. I was building most of this all in private for many years. Quilibrium has been a seven-year mission at this point. But during that time, there was an observation I was making about the notion of a blockchain true to the original. I wouldn't call it necessarily the original definition, because even Satoshi in the white paper didn't say blockchain as one word, and specifically just referred to the structure as "block", space, "chain". It was just an observational term, not an actual formal definition. Satoshi actually tried to argue it's a time chain, and that never took off.

Nicholas: What did they mean by time chain?

Cassandra Heart: So, I mean, the whole point of these decentralized systems is to reach consensus on a sequencing of data. And blockchain does that by virtue of, essentially... in the context of distributed systems, you sometimes have what's called a leader. And that leader is essentially the one responsible for producing the authoritative state of data for some period of time. And so Bitcoin does this through essentially a randomized lottery. By having people churn through hashes to meet some difficulty bar, treating the hash as a number where the smallest number wins, you end up in this state where random nodes that are contributing to mining on the network are essentially just playing to be leader for that one block. And that works. But the problem is that it's very slow and very computationally expensive. And so what I was kind of leading up towards, in terms of observations about the industry, is that a lot of chains don't even do this anymore. A lot of chains do things like proof of stake or delegated proof of stake or some other kind of thing. Solana has proof of history and proof of stake, but proof of history is actually just a hash chain that tries to provide that same kind of decentralized sequencing of events. But nevertheless, it's still singularly a leader election. But one thing I was noticing about the evolution of all these various self-ascribed blockchains is that a lot of them are now starting to do things like having L2s or even L3s. And structurally, when we talk about blockchain as it is kind of formally understood to be, it is just a block with a bunch of transactions. And then it is pointing to the previous block, and it just continues on, for a chain, ad infinitum. And that's great. But that structure specifically has been basically violated in every single stretch of measurement you can conceive of. Because now with Ethereum, for example, you have two different sections of data that you're maintaining under the blocks. And there's actually technically two blocks. You have the execution blocks in the execution layer. And then you have the consensus layer for proof of stake under the beacon chain. And that has its own kind of sequencing as well. So you already have two parallel tracks. That is no longer a blockchain. You might be able to argue it's technically two, with one having a dependency on the other. But structurally, it doesn't actually look like a blockchain anymore. And then with the advent of using call data, and later KZG proofs, as committed data stores for L2s, you now actually have a tree of blockchains that are fanning out potentially infinitely. Realistically, no, because obviously there's gas limits and a limited number of blobs that you can have per block. But essentially, you have this forest of blocks rather than a true blockchain. And so that's what I mean by the way that technology has evolved: structurally, nothing looks like a blockchain anymore except for Bitcoin. They're kind of like the old guard that keeps that particular structure alive in its truest form. And even they've kind of violated it a little bit over time. So when I say that Quilibrium is not a blockchain, it's because it, in my opinion, isn't. But at the same time, you can look at the core data structure for how everything reaches consensus and coordination across the network. And that actually is a series of data frames that are linking historically backwards to one another. The way that that's produced, it's not proof of work. It's proof of meaningful work. It uses verifiable delay functions instead of hashes. The data frames themselves do not actually contain the overall contents of data of the network like a transaction-based history ledger would. So that's why I say Quilibrium is not a blockchain. You could think of it in that same way, but very structurally, it looks nothing like the original blockchain.
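To make the "randomized lottery" described above concrete, here is a minimal sketch in Go. It is purely illustrative: the header bytes and the difficulty target are stand-ins, not Bitcoin's actual block format or difficulty encoding, but it shows the core mechanic of grinding nonces until the hash, read as a number, falls below a target.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"math/big"
)

func main() {
	// Stand-in for a block header (not Bitcoin's real serialization).
	header := []byte("prev-hash|merkle-root|timestamp")

	// Difficulty target: the hash, read as a 256-bit integer, must fall below
	// this. Requiring ~16 leading zero bits keeps the loop fast for a demo.
	target := new(big.Int).Lsh(big.NewInt(1), 256-16)

	for nonce := uint64(0); ; nonce++ {
		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, nonce)
		h := sha256.Sum256(append(header, buf...))

		// Smallest-number-wins: the first hash under the target "wins the
		// lottery", and its producer acts as leader for that block.
		if new(big.Int).SetBytes(h[:]).Cmp(target) < 0 {
			fmt.Printf("nonce %d wins with hash %x\n", nonce, h)
			return
		}
	}
}
```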

Nicholas: So, okay. So we're going to get into all of those things in just a second. But I have heard you mention, about proof of stake, that there are problems with this weak subjectivity paradigm. And I'm curious if you can provide any commentary on how that differs from Satoshi's vision. Maybe can you provide any kind of perspective on weak subjectivity and how Ethereum works today?

Cassandra Heart: So yeah, weak subjectivity was kind of an element of discourse for a little bit, where it was popular to talk about and then everybody forgot about it. But it's this kind of notion that the nature of the network maintaining consensus is essentially two different contracts, not in the smart contract context, but like an informal contract in society. At the protocol layer, you have consensus via proof of stake. But there is the possibility that either consensus layer faults can develop or execution layer faults can develop. And so they've been trying really hard to encourage people to adopt alternative execution clients, alternative consensus clients, so that essentially Geth and Prysm weren't the two big leaders of the network, so that that way there was some level of security against any sort of consensus faults that could cause very bad rippling effects that slash, you know, significant portions of the network. On top of that, there's also this notion of slashing. And so if somebody misbehaves intentionally, they would get slashed and they would lose the funds that they've put up for stake. That's kind of the economic security layer of the network, and how proof of stake is supposed to replicate the same kind of security layer as proof of work. It's never really been put to the full test like proof of work has been. When you look at proof of work, you have this kind of series of tests that have evolved over time. For example, forks of Bitcoin, like Bitcoin Cash, have been tested by people just buying absurd mining power to overwhelm the network so that they're able to rewrite history. And that level of force that can happen is purely protocol. There is no subjectivity around economic discourse. It's just strictly whoever has the most mining power wins. And as long as the network stays relatively stable and the value of that protocol continues to climb up, then proof of work in the traditional context can work very well. In proof of stake, we have a secondary defense layer that has been called weak subjectivity, in the sense that if, for example, there is a major protocol fault, like two-thirds of the network is getting slashed because something just broke down, that would be very severely catastrophic to the economics of the network. Undoubtedly, nobody would be okay with that. And we're actually starting to reach certain pressure points where this is a valid concern. Many people are now starting to adopt liquid staking tokens, which means that they're delegating their expectation that they're not going to get slashed to that liquid staking token provider. And there is a dangerous consequence of that: if you accumulate enough of the economic value in that and they do get slashed due to a protocol bug, because we'll assume that they'll be honest, then now you have completely screwed everything up. And so the second layer of subjectivity comes in where, essentially, the community would agree to halt the chain at a specific point in time, revert to that previous state before everything went haywire, and proceed from there. Bitcoin has had very few moments like that historically in the protocol. It was very, very early on where somebody was able to find a way to mint way more coins than they were supposed to have. And so they halted, moved back. And that was very, very early on in the development of the network. And it's never entered any sort of condition like that again.
If there was ever a big hack, it's just too bad, so sad.

Nicholas: Longest chain wins.

Cassandra Heart: On Ethereum, say what?

Nicholas: Longest chain will be considered the authority, regardless of if there were a bug in a major client or something.

Cassandra Heart: Correct. And so for Ethereum, the very big incident, of course, that people probably don't even remember anymore, for the most part, because a lot of people are very new to crypto now. They weren't born yet.

Nicholas: Was the DAO hack. Yeah.

Cassandra Heart: Yeah. And you can argue whether or not, even in that era, because it was proof of work, there was any authority. Realistically, there were conversations that came out during the FTX lawsuit, and a few times before, where it was revealed, no, Vitalik was actually in a chat with several exchanges saying, stop trading. And so there's very clear evidence, even in the proof of work era of Ethereum, that there's a higher-level force at play here. It's not just the protocol. There are also thought leaders that have enough sway that they can actually make those kinds of judgment calls. And so my take about subjectivity is that all of these things are problematic, that to have any sort of social layer of trust on a protocol is intrinsically broken. And the only way you can have a truly decentralized system that can preserve against any sort of societal pressure is it has to be strictly subject to the rules of the protocol. And if you ever have to violate that, or if you ever violate it in its finalized state, because obviously during development phases, you're going to run through a lot of ideas and iterate. But once it's in a finalized production state, if you ever violate that principle, that is not a decentralized protocol. And so, spicy take, I'm sure a lot of ETH maxis would be very upset with that point of view, but that is the reality: if there are people who can make a judgment call and that changes the state of the network, even if it's a decent number of people in some sort of foundation or community, that's still not enough. You need to actually have some formally bound protocol rule for making those kinds of changes, making those kinds of forks, if need be.

Nicholas: I suppose, even in Bitcoin, I mean, if the Bitcoin community were to decide to fork on the basis of some, I don't know, let's say there is a bug or something, the community could fork, and has done in the past, to a different chain with some different set of rules, right? So there is always this, I guess, layer zero or social layer that has the opportunity to fork a chain.

Cassandra Heart: It's a little complicated. The difference there is that, you know, there's obviously no leader. Satoshi's gone. Satoshi's been gone since 2010. And so for Bitcoin, what it comes down to is literally just people who are part of the core development team. And technically there is no truly official core development team. There's just an informally recognized core development team. And from that group of people, they actually have a diverse set of opinions. There are people who are core developers, like Luke Dashjr, who's very adamant about fighting what he considers a bug in Bitcoin, which is ordinals. The fact that the data carrier aspect of Bitcoin is being used through calls other than OP_RETURN, which is what lets ordinals live on the network, is to him actually just spam, and it should be completely crushed. And so it's actually an interesting phenomenon, because you do have people who disagree over the state of bugs on Bitcoin. And ultimately what wins out is whether or not the collective miners on the network agree to that. And so what these miners individually have to do to assert their opinions and influence is, one, get more people as part of their mining pool, and two, have their mining pool exert certain rules that go beyond the scope of the protocol. So for example, his particular mining pool doesn't allow ordinals transactions to go through on blocks that they mine. And that kind of balance and interplay and open discourse and complete ability to disagree is something that doesn't quite work in Ethereum. You can't really get away with most of those things. You might get away with things like being able to prevent MEV, like with Flashbots' special RPC, or having to comply with OFAC if you are an American company that is operating validators.

Nicholas: Right. So maybe we can come back to how this relates to production-scale Quilibrium in just a second. But first let's start with: what is Quilibrium from a more technical perspective? What is the architecture? Is it possible to describe, in a high-level way, what the architecture of Quilibrium is?

Cassandra Heart: Yeah. I mean, if you want to adopt blockchain terminology, you could argue it's a chain, but realistically the structure of the network is basically taking the lessons learned from networks like Ethereum that have already gone through heavy utilization and pain points in certain things, and also that layering kind of shard basis that has emerged. Ethereum has specifically said that they've dropped sharding from the roadmap because L2s are good enough. So anyway, this kind of history of how these protocols have evolved has been very informative. Ethereum has some strong lessons learned. Solana has some strong lessons learned. And ultimately, by consequence, we've been able to take those lessons learned and evolve past what those protocols are kind of stuck in, in terms of technical debt. One of those things is the emergence of using shards on the network, not shards in the MPC context, I've found that that has somehow created a new layer of confusion, but shards in the distributed context. So Ethereum has kind of dropped sharding from its roadmap. They have said that L2s are good enough. And that might be true for Ethereum's case, but there is this interesting economic reality that comes from that, which is that these L2s all have to compete for the same block space, instead of a block space that is designed to comprehensively scale out to the number of L2s that are sufficient to fulfill the network's obligations. And why that's kind of a problem for Ethereum is that they structurally have taken this idea of, let's just recursively run Ethereum through the layers. And so even the L2s are running essentially some fork of Geth for the most part. There are some L2s that are doing something special, but a lot of them are doing some fork of Geth. And from there, they are processing ETH transactions in the same way. Either they are using their own proof of stake method, or in the case of Optimism, it's just essentially proof of authority for now, with some optimistic rollup that lands on the network. Maybe there are some L2s that are going to decentralize eventually, but there's no direct time frame about that. So there's this kind of notion that they're delegating authority to individual organizations to, in their own meaningful way, try to scale out the network. But it is essentially the same EVM, just at different layers in the network. And so you have to spend all of that same compute, spend all of those same resources, for all of the same drawbacks and failings that have happened over time, because it was originally a 19-year-old college student's idea that kind of spiraled out of control into what it is today. And so by consequence, there's a lot of technical debt. And they've adopted all that technical debt. And so you end up with this condition where you're seeing a lot of the same problems that core ETH has, but all the way down into the L2s, on top of there being economic competition between L2s for settlement.

Nicholas: Why is that a problem? Shouldn't it be better than them having that block space themselves, to instead have the roll-ups compete? It should be more efficient, right?

Cassandra Heart: Yeah, you could argue that's better, but at the end of the day, you're still limited. There's only a maximum of six blobs per block for the KZG proofs now. And for a brief period of time, we actually saw that hit serious contention, because we had blobscriptions happen, and so there were people actively fighting for that blob inscription space against the people who were running L2s. And it is a permissionless dynamic where there's a separate fee market associated with blobs. It makes sense when compared against call data. It makes no sense when you're trying to actually scale up the network to a large number of L2s. And so right now, things are working. But as more and more L2s move to blobs, as more and more L2s exist, we start to hit those intrinsic scaling issues.

Nicholas: So if I understand correctly, basically, you wouldn't design the internet to have a limited, I mean, I suppose the IPv4/v6 transition is one example of this, but you wouldn't design a system that's intended to be the basis for all future internet communication with a fixed amount of data that it can contain per interval. Is that-?

Cassandra Heart: No, no, I do. It's just that the difference is in how you achieve that: a much larger quantity. So in other words, to draw that comparison of IPv4/IPv6, Ethereum is essentially operating in this kind of context where it's almost like, I don't know what... this is actually where I'm showing my lack of knowledge about some aspects of internet history, but something prior to IPv4 in terms of how many total addresses. You're only allocated 255 addresses and that's it. And so by consequence, you do have a lot of competition for those 255 addresses, except in this case, it's actually only six. So that's kind of the situation to be in. And where I was coming from about it is that you can actually learn from that. The use of blobs is valuable for sharding, not just to provide an intrinsic settlement primitive that is cheaper and faster to validate than using call data. KZG is great for that, but you can actually take it a step further. You can actually use this proof process to scale out the number of individual shards that can exist on the network, and compact it very densely into what is actually replicated across the whole of the network. And so from Quilibrium's perspective, we have essentially 256 individual slots, if you will, at the master level of the network, the thing that everyone must agree upon. And that's roughly 19 kilobytes of data that everyone has to keep synced every 10 seconds. But there's lots of blockchain-oriented protocols that are pushing way more data than that, that are pushing block times as fast or faster than that. And that's a reasonable amount of data to keep in check. But what's unique about that is that unlike how Ethereum does it, where they have 4096 elements for a single blob, we have those 256 elements, and we use that to collectively roll all of the proofs for all of the shards of the network up into it. And the nature in which we do this compressive rolling of proofs actually creates a shard space that is almost... I mean, it's not truly infinite, but for all intents and purposes of mankind, for the nearest millions of years, we will never encounter the end of that shard space. And so it's very similar to IPv6, where there's more practical addresses available than there are actual atoms in the universe. Very similarly for Quilibrium, there's more shards possible than there are atoms in the universe. And so that gives us the capability of being able to scale out, for that practical purpose, infinitely. And Ethereum could have done the same thing if they had just used the blobs differently. They could have done the 4096 and they could have had a different compression proof approach. But at the end of the day, it's the same box of tools; the way that we're using it is informed by the lessons learned from what they did wrong.

Nicholas: Mm-hmm. So is this the difference between universal consensus and universal coordination that you're describing, or is that something else?

Cassandra Heart: Yeah, actually, that is a very apt way to put it. We technically do have universal consensus to a certain degree. There are levels of replication lag that will exist between shards to some meaningful degree, the more and more relationships you have embedded across those. That's unavoidable. That's just a consequence of distributed networks. But as far as universal coordination goes, the network is always going to be able to do that. The network is constantly staying on the same heartbeat, that same 10-second interval, for what is called the master clock. There's essentially two different layers in which the network coordinates. You have the global level, where everyone's keeping and maintaining that 256-element blob for global consensus and global coordination. And then within the actual individual shards, you also have consensus being maintained there.

Nicholas: Okay, I see. So we're starting to get into the more technical matters here. Would it make more sense to talk about proof of meaningful work first or VDFs?

Cassandra Heart: I mean, proof of meaningful work relies on VDFs, so it probably would make more sense to talk about that.

Nicholas: Yeah, okay. So what is a verifiable delay function, maybe for folks who aren't familiar?

Cassandra Heart: So a verifiable delay function was formally introduced because, I mean, people have kind of had various ideas about how you cryptographically verify time since the late '70s. But it was much more recently introduced by Dan Boneh and a few others to formalize what it means to be a verifiable delay function. And it comes down to essentially two really important attributes, and a third that is basically a characteristic. The first very important attribute is that it is guaranteed to not be parallelizable. Like, you cannot run any of these operations in parallel. They're inherently sequential. You can actually see a predecessor to this in Solana, for example, with their proof of history. They use what they call a hash chain. Like, it's literally just take a hash of a value, then take a hash of that, and then take a hash of that, and so on. You can verify that in parallel much more easily because you have all the values up front, and you can run them all in parallel. But you cannot actually produce that value initially in parallel. And so a verifiable delay function respects that it is not possible to parallelize, but it is also succinctly verifiable. And so that breaks Solana's implementation, in the sense that Solana requires all those thousands and thousands of hashes to be communicated in order to prove that that chain of hashes actually equates to that final value. There's no way to readily compress that. You might be able to come up with some clever thing with zk-STARKs, but the time that it would take to do that is, again, inherently slow. And so you don't get the end result you want. You need something sufficiently, or sorry, efficiently verifiable and very succinct.

Nicholas: So VDFs are succinctly verifiable in contrast to Solana's proof. And that means that you can achieve this kind of parallel, or you don't require all of the data in order to make the verification, but you can do it in parallel or at least quickly.

Cassandra Heart: Yeah. So just to give a simple example, let's say that we have a number system, like, you know, just our integers. This isn't how it actually works, but, you know, let's say we have our integers. And you take a number, like, say the number two, and you square it. You get four. You square it. You get 16. And so on. You just keep repeatedly squaring that number ad infinitum. You do that t number of times. Now, it is very easy under the natural numbers to just simply prove, you know, what is t number of squarings. There's very efficient approaches to actually achieve that. But let's say that our number system is not efficient. Like, you cannot rapidly just take two to the power of two to the power of t and calculate that very quickly. If you can't, then now you have something where it is very easy to guarantee that it cannot be parallelized. And on top of it, you also want to guarantee, from this number system, that you get some sort of succinctness. Like, the number system does let you at least easily verify, without having to have that entire chain of numbers, that, oh yes, this digit is actually indeed the repeated squaring of that number. And this is actually what a lot of VDFs do. They are actually doing repeated squaring. Like, it's complicated math in terms of the special kinds of numbers that they're working in, but the actual underlying math of what you're trying to verify is just repeatedly squaring a number. And that's very obvious and very easily understood to most people, even with high school algebra. And so a verifiable delay function should do both of those things. And then there's this final aspect the paper covered as a characteristic. What is important for some implementations that need a verifiable delay function is that the solution is guaranteed to be unique, that there isn't some way you can create another value or find another value that can convince the verifier that it is also an answer. And unfortunately, there was actually a recent paper that found that there is no way, in the way that VDFs are currently being constructed, that you can actually guarantee uniqueness. Now, there's a certain degree of obviousness for cryptographers, like, "Oh, well, that makes sense." Of course you couldn't, because you're compressing things down into a smaller size value than the actual numbers that you're working in. And so, per the pigeonhole principle, yes, of course you're going to hit collisions. It's the same thing with hashes. You're going to hit hash collisions no matter how hard you try. But for all practical purposes, you design your system so that it doesn't matter. Like, if you find a colliding value, that's okay. And so, ignoring that one basis that was originally part of the formal definition, that's the normal definition of VDFs. VDFs are a powerful primitive because now you have some sort of tool that can give you a degree of verifiability of the progress of things over time. And so we use VDFs very liberally throughout the entire application. We have VDFs that sequence events on the network globally, so that we have this consistent 10-second interval for heartbeats on the network. And then we also use VDFs on the proof of meaningful work side of things, which is where we're essentially proving large amounts of data that are being retained, and also the proofs of execution on that data.
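As a rough illustration of the repeated-squaring idea described here, below is a toy delay function in Go: it computes x^(2^t) mod N by t sequential squarings. It is only a sketch under simplifying assumptions: it verifies by recomputing, which is deliberately naive, whereas a real VDF such as the Wesolowski construction adds a short proof so verification is fast, and it works in a group of unknown order rather than a modulus whose factors the generator knows.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// eval performs t sequential squarings of x modulo n. Each squaring depends
// on the previous result, so the work cannot be parallelized.
func eval(x, n *big.Int, t int) *big.Int {
	y := new(big.Int).Set(x)
	for i := 0; i < t; i++ {
		y.Mul(y, y)
		y.Mod(y, n)
	}
	return y
}

func main() {
	// Toy modulus: the product of two random primes. Whoever generated these
	// knows the group order, which a real deployment must avoid (e.g. via
	// class groups or a trusted setup).
	p, _ := rand.Prime(rand.Reader, 512)
	q, _ := rand.Prime(rand.Reader, 512)
	n := new(big.Int).Mul(p, q)

	x := big.NewInt(3)
	t := 200_000 // the "iterations" parameter: more iterations, more wall-clock delay

	y := eval(x, n, t)
	fmt.Printf("computed x^(2^%d) mod N after %d sequential squarings\n", t, t)

	// Naive verification by recomputation. A succinct VDF proof would let a
	// verifier check y in far fewer steps than it took to produce it.
	fmt.Println("verified:", eval(x, n, t).Cmp(y) == 0)
}
```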

Nicholas: Okay. So a VDF takes a minimum amount of time to compute, to execute. And others can verify that it was produced authentically, that there is some -- maybe could you give a little bit of clarity on what it means -- what the -- what can be proved about a VDF from an observer's perspective?

Cassandra Heart: Yeah. So a VDF essentially takes just a few specific parameters. The most important one that most people can understand is some value to represent time. What that actually means is iterations in this case. So you're saying, like, let's say that this takes 10,000 iterations and for most hardware that's roughly about 10 seconds. Now you have a way to say insert this random value, this challenge value, if you will, and then square it for 10 seconds' worth of time. You give me back a value. Maybe that value is some, like, combination of an output value and a proof, or maybe it's just a proof, whatever it is. You give it to me, and I'm able to plug it back into the verification step and confirm that, yes, this -- this is valid for that number of iterations that you were supposed to do this for. I now know that you have held onto this data for 10 seconds.

Nicholas: And that's not -- wouldn't that be dependent on the speed of the hardware? It is. Could you not -- It is. Okay. So it's not -- the time is not human time, but rather iterations.

Cassandra Heart: Yeah. Yeah. And so the way that we -- we try to address that is that essentially the network will perform certain behaviors based on the parameters of the hardware. So over time, you have this -- similar to how, like, Bitcoin, for example, has this recalibration of difficulty from the number of hashes that are being produced. You know, Quilibrium has the same kind of notion, except instead of just simply saying, like, fastest hardware wins, we just take a rough average of what the current number of iteration steps to produce 10 seconds of VDFs is and just cut it to the middle. So if you are a machine that is capable of producing these faster, great. Don't. It won't do you any good. There's -- there's no -- there's no sort of, like, insertion of proof basis onto anything deeper that comes from just being able to produce. This is just a sequencing mechanism.

Nicholas: So, like, until the median machine in the network has achieved its VDF -- Right.

Cassandra Heart: And so you -- you provide this. You provide this proof out to the network. And everyone is essentially hanging onto those proofs in order to sequence events around it. And any machine that is under that bar that is not, you know, able to produce a proof quickly enough to match that speed just simply doesn't. They will just listen and trust that the leading hardware -- well, not even trust, but they will listen and verify that that leading hardware's heartbeats is indeed accurate. And so it uses that information without having to produce that information and waste more CPU cycles. So you get a -- you get a degree of ecological benefit in the sense that the network isn't just perilously running, you know, endlessly just to -- to maintain consensus.

Nicholas: And this yields a kind of clock for the whole network, among other things.

Cassandra Heart: Yeah. It's essentially two different attributes that are really important. One, it is a clock. It is like a Lamport clock, in the sense that you have, you know, events that are happening and events that follow those events. And you have some degree of time stamping from this that you can essentially guarantee. It's kind of like the, you know, mob boss taking a photo with a newspaper to prove that they weren't somewhere for some time as part of their alibi. It's the same kind of concept: you have these particular proofs that are being generated, rolling data into those proofs, in order to prove that this data existed for at least this period of time. And so that's for the master clock. And two, we get a randomness beacon out of it, because these numbers are inherently random. And so you can select leaders for different shards based on that pulse that comes out of the clock.

Nicholas: Got it. And you mentioned a Lamport clock. Maybe -- could you explain what that is?

Cassandra Heart: Yeah. So a Lamport clock is this kind of computer science concept that was originally designed by Leslie Lamport. It basically was this idea of: how do you sequence events in globally distributed systems? And so you have basically a kind of timeline. You have a timer counter that follows along with events that are being emitted in the system. So you have some sort of sequencing basis where, for every single step of the network, we are emitting a plus one to this monotonically increasing timer value. And so when we send these messages along, we have some degree of coordination that can assert the ordering of events around those events. Now, in a traditional distributed data center version of a Lamport clock, there's an element of trust. Essentially, all actors of the system have to be honest in order for this to not go completely sideways. And so the use of a VDF provides that same kind of Lamport clock-esque sequencing, but it's cryptographically verifiable instead. We don't have to actually trust that people are giving us the right time. We actually can just use the time generated from this clock.
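For reference, here is a minimal Lamport clock in Go, the classical trusted version of the ordering primitive contrasted above with Quilibrium's VDF-based clock: each process bumps a counter on local events and, on receiving a message, jumps its counter past the sender's timestamp.

```go
package main

import "fmt"

// LamportClock is a monotonically increasing logical counter.
type LamportClock struct {
	time uint64
}

// Tick is called on a local event and returns that event's timestamp.
func (c *LamportClock) Tick() uint64 {
	c.time++
	return c.time
}

// Recv is called when a message stamped with `remote` arrives; the local
// clock jumps past the sender's timestamp so the ordering is preserved.
func (c *LamportClock) Recv(remote uint64) uint64 {
	if remote > c.time {
		c.time = remote
	}
	c.time++
	return c.time
}

func main() {
	a, b := &LamportClock{}, &LamportClock{}

	t1 := a.Tick()         // A does something locally
	t2 := b.Recv(t1)       // A's message arrives at B
	t3 := a.Recv(b.Tick()) // B does something and replies to A

	fmt.Println(t1, t2, t3) // timestamps respect the happened-before relation
}
```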

Nicholas: Very cool. Is that the first time -- is that something novel to Quilibrium, or is that borrowed from somewhere else?

Cassandra Heart: And so that's actually one of the most powerful things about this network: nothing we do is that new. We're just doing things in new ways. The Chia network is probably the most popular network that uses this particular VDF that we're using, but there's others, like Harmony. Now, there are some of these that have had some problems, not just implementation bugs. If you look at Chia Network's code, for example, they call their VDF engine the Timelord. Okay. I love Doctor Who as well, but okay. Anyway, what they do with it is, when it runs and it produces proofs, if there's an invalid proof submitted, they just let it crash. They actually just let it crash. Like, if you submit a proof that causes it to divide by zero and it crashes, it just crashes. It doesn't have any special checks or handlers. They just let it crash and restart it and keep going. That sucks. And so we don't do that. Harmony has an issue with their implementation of the same verifiable delay function that we use. It's the Wesolowski VDF. It's the same one Chia uses as well. They didn't actually put the time, like the iterations parameter, into the inputs of the proof generation. And so technically, you actually have many, many, many possibilities for generating things that will verify as a proof that are not actually proofs. And so we fixed that bug. But on the whole, we use the same thing. We don't do anything that is completely invented. The only thing we do that's inventive is the composition of all of these elements.

Nicholas: So the Lamport clock and the VDFs get us closer to proof of meaningful work.

Cassandra Heart: Correct. And so proof of meaningful work is basically just two things that are very important. We want to prove that data is being held onto, and we want to prove that that data was computed accurately. And so the way that the network operates for calculating anything on the network is either in an online environment, in an MPC context, or in an offline ZK context, and that offline ZK context is actually just using that same MPC with a technique called MPC-in-the-head. So MPC-in-the-head, just to outline what that is, is basically, as a prover, you are imagining all of these MPC participants and simulating them all, and then taking the execution trace of what all of those are doing, and that output value is your proof. And so you're able to do things that you would normally use MPC on the network for, but if you need to be offline, like... say you're, like, Coinbase, and you have cold custody, and you have a Faraday tent that you're doing all your operations in, you can't just connect to the network live and perform your transactions. So instead, you create this offline proof that follows the same rules as the actual online network, you submit your proof, you've made your transaction, good. So either of those two approaches ends up with output values that you can use to prove execution on the network. The data itself is just proved through the VDF. You have this, you know, random pulse clock for the network that is issuing a heartbeat, and you are a prover holding some set of data, and you want to prove that you've held onto that data for as long as it's existed. And so every heartbeat, you emit that proof by taking a random selection, using the randomness beacon, to roll through that data set to create a KZG proof. That KZG output proof value is very succinct. It's only 74 bytes. You take those 74 bytes and put them through your VDF step, and you just keep doing that over and over and over again. And you emit those 74-byte values with the VDF proof on those intervals, and that is your proof that you've held onto that data for a period of time. There's no way to cheat it, because you have to emit new proofs that are constantly valid against new challenges on the data. So it proves you have to have the data, or else you couldn't generate the proof. And that's it. You just have proofs of execution and proofs of data combined. That's proof of meaningful work.
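A hedged sketch of the heartbeat loop described above. The real protocol uses KZG commitment openings and a Wesolowski VDF; here `kzgProve` and `vdfStep` are hash-based stand-ins so the sketch runs on its own, and the names are hypothetical rather than Quilibrium's actual API. The point it illustrates is that each heartbeat's challenge forces the prover to touch the data again, and folding each opening into a sequential step chains those proofs over time.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// kzgProve stands in for a succinct commitment opening against a random
// challenge drawn over the data the prover claims to hold.
func kzgProve(data []byte, challenge uint64) [32]byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, challenge)
	return sha256.Sum256(append(data, buf...))
}

// vdfStep stands in for folding the latest proof into the sequential VDF.
func vdfStep(prev, proof [32]byte) [32]byte {
	return sha256.Sum256(append(prev[:], proof[:]...))
}

func main() {
	data := []byte("the shard data this prover has committed to retain")
	var chain [32]byte

	// Each heartbeat, the randomness beacon yields a fresh challenge; the
	// prover opens its commitment against it and folds that into the chain,
	// so the running proof can only be produced by someone holding the data.
	for beat := uint64(1); beat <= 5; beat++ {
		challenge := beat * 7919 // stand-in for the beacon's random pulse
		proof := kzgProve(data, challenge)
		chain = vdfStep(chain, proof)
		fmt.Printf("heartbeat %d: proof %x..., chain %x...\n", beat, proof[:4], chain[:4])
	}
}
```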

Nicholas: Okay. So, proofs of execution, proofs of data. So, I guess we're able to store data provably, and able to execute new computations atop that data, or new inputs. Correct. New algorithms. So we're kind of overlapping in terms of functionality with things like Arweave and things like Ethereum or another blockchain that allows for, like, Turing-complete computation. Correct?

Cassandra Heart: Correct.

Nicholas: Amazing. There's some other terminology that comes up in the documentation. How are Bloom filters used in Quilibrium?

Cassandra Heart: So, Bloom filters actually had two purposes originally. And this is a relic of the original design of the network; the white paper needs some updates. But the original use of Bloom filters was actually in two places. The first was how messages distribute across the network. This gives you a guaranteed level of shards that this data is reaching on the network and being able to replicate it. The other aspect is that it provides a random distribution of who could possibly receive it. And this creates better dissemination of data on the network, to guarantee replication isn't being constrained to a specific set of data centers, because that creates an inherent risk that the data may disappear. And so Bloom filters are just really effective tools for probabilistically guaranteeing that a subset of individual nodes have that data. The other thing that we were using Bloom filters for was actually part of proof of meaningful work. We were originally using a Bloom filter selection basis to create Merkle proofs of the data. So instead of doing a KZG proof, we were actually using Merkle proofs to prove that you have those particular challenge sets of data. And you would just emit the longer Merkle proof with your VDF proof. But ultimately, that's a waste, because we already have KZG proofs on the network. We could just use KZGs as a simple vector commitment scheme and choose the specific challenge value to generate the proof. And now it's only 74 bytes instead of log n times 32 bytes.
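For readers unfamiliar with the data structure, here is a minimal Bloom filter in Go. The sizes and hashing scheme are illustrative only; the idea is that k hash functions set k bits per item, membership tests can false-positive but never false-negative, and that cheap probabilistic membership is what makes it useful for choosing which subset of nodes should receive and replicate a piece of data.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Bloom is a fixed-size bit array queried through k salted hash functions.
type Bloom struct {
	bits []bool
	k    int
}

func NewBloom(m, k int) *Bloom { return &Bloom{bits: make([]bool, m), k: k} }

// indexes derives k bit positions for an item using FNV with a per-hash salt.
func (b *Bloom) indexes(item string) []int {
	idx := make([]int, b.k)
	for i := 0; i < b.k; i++ {
		h := fnv.New64a()
		fmt.Fprintf(h, "%d:%s", i, item)
		idx[i] = int(h.Sum64() % uint64(len(b.bits)))
	}
	return idx
}

// Add sets the item's k bits.
func (b *Bloom) Add(item string) {
	for _, i := range b.indexes(item) {
		b.bits[i] = true
	}
}

// MightContain reports false only if the item was definitely never added.
func (b *Bloom) MightContain(item string) bool {
	for _, i := range b.indexes(item) {
		if !b.bits[i] {
			return false
		}
	}
	return true
}

func main() {
	f := NewBloom(1024, 3)
	f.Add("node-A")
	f.Add("node-C")
	fmt.Println(f.MightContain("node-A")) // true
	fmt.Println(f.MightContain("node-B")) // almost certainly false
}
```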

Nicholas: Got it. Okay. So are Bloom filters in use anywhere now?

Cassandra Heart: Yes. Specifically in the messaging, and the distribution of where data goes for shards.

Nicholas: Okay. I see. Verifiable computation is really important in Quilibrium. How does Quilibrium use verifiable computation to enable -- I guess maybe you did explain this in the proof of meaningful work answer. But I'm curious, how can you know that -- you mentioned, I think it's in the white paper, that you can be sure that your program works. That your program was executed in a trustless fashion, even when being executed by untrusted environments. I suppose the proof of meaningful work, applying the -- what you just described, I suppose, explains how you're able to be sure that the computation is being executed correctly, even if you aren't able to trust the node operators that are running the software.

Cassandra Heart: Yeah. So, I mean, we can take it many levels deeper in terms of the actual cryptography underneath. In the offline context, the zero-knowledge proof gives you that same proof. In the online context, our MPC model is maliciously secure. And so there's lots of different MPC models out there. A lot of people operate under the semi-honest model, where there's some number of trusted executors. For example, say you have an MPC wallet. The trust basis is very frequently just, like, one of N. So, in other words, as long as one person is honest, then everything is okay. That works great for MPC wallets, because, I mean, otherwise, you're going to be dishonest with your own key, and what does that get you? Your full key value? Okay, cool. But we offer export. You've achieved nothing. So semi-honest works in a lot of cases. But in Quilibrium, we actually do need malicious security. And so a way to kind of think about that is a verifiable computation interaction where the data needs to remain private. Like, let's say I am sending encrypted data to someone, and I want to prove that they have received the data that is verifiably what it's supposed to be. So say that this data is private, so I need to use either verifiable encryption in the raw context or MPC, either way. I provide that input data into the private parameters of this execution. You could think of it like a black box. And the way that this works in a semi-honest model is, if someone cheats, they either learn that entire piece of data, or they learn some bits about the data, and with enough attacks, can reveal that data. In the malicious security model, you don't get that. You don't get any sort of advantage that enables a malicious counterparty to reveal any bits of data. If it fails to execute, it just fails to execute, and there's no additional information that can be gleaned. And so what that gives us is something really powerful, because not only do we get security in the MPC online context that is just as sufficiently secure as anything else, we also get the advantage that we can now identify who is doing these particular malicious behaviors. So you can prove whether or not somebody was offline for a period of time when they were supposed to be online, or you can prove that somebody was trying to screw around with compute in a way that was trying to reveal data. And so that's a very powerful primitive, because now you can have a very easy way to, at the protocol level, vote bad actors off the island, survivor-style.

Nicholas: I see. I see, wow. Can you explain, what is fully homomorphic encryption, and what role does it serve in Quilibrium?

Cassandra Heart: We don't use fully homomorphic encryption. Fully homomorphic encryption is a specialized version of homomorphic encryption that enables all types of operations on data in an encrypted context. So, for example, you have a public key for ECDSA, and you have a private key that corresponds to that public key. Public keys are literally just points on that elliptic curve, and you can add points together; there is an entire additive system for adding points together. When you are generating a public key, you're taking a public point that everyone knows, called the generator, and using your private key as a scalar value to multiply that point. Multiplication is literally just a series of doubling and adding of that point in order to reach your final value. Those operations on public points, which let you add points together despite not knowing the private values, are a form of homomorphic encryption called additive homomorphism. Fully homomorphic encryption lets you do any data operation. We do not use it because all of the current fully homomorphic cryptosystems are slow and unusable: they literally cannot perform at the scale a network needs in order to provide real value-added services. At best, you might have really tiny subsets that work efficiently enough for certain use cases, like private voting. But there are also approaches that can maintain the same privacy without having to do fully homomorphic encryption. So someday I hope it gets better. Right now, not so much. We just use MPC.
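
For readers who want to see the additive homomorphism Cassie describes, here is a small Go sketch using the standard library's P-256 curve: adding two public points gives the same result as deriving the public point of the summed private scalars, without ever exposing those scalars. This only illustrates the general property; it is not code from Quilibrium.

```go
package main

import (
	"crypto/elliptic"
	"fmt"
	"math/big"
)

// Demonstrates additive homomorphism on elliptic curve points:
// a·G + b·G equals (a+b)·G, computed purely from public values.
func main() {
	curve := elliptic.P256()

	a := big.NewInt(1234567)
	b := big.NewInt(7654321)

	// Public keys: a·G and b·G.
	ax, ay := curve.ScalarBaseMult(a.Bytes())
	bx, by := curve.ScalarBaseMult(b.Bytes())

	// Point addition using only the public points.
	sumX, sumY := curve.Add(ax, ay, bx, by)

	// (a+b)·G computed from the private scalars, for comparison.
	ab := new(big.Int).Add(a, b)
	abX, abY := curve.ScalarBaseMult(ab.Bytes())

	fmt.Println("a·G + b·G == (a+b)·G ?", sumX.Cmp(abX) == 0 && sumY.Cmp(abY) == 0)
}
```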

Nicholas: Okay, just MPC. MPC. What is a mixnet?

Cassandra Heart: So a mixnet is a technique for providing privacy at the analytic level. Take Tor, for example. Tor is an onion-routing-based network, and Quilibrium is an onion-routing-based network. The problem with onion routing is that it relies on privacy through a technique called envelope encryption. What that means is, say in my route to send a message to you, I have hop A, hop B, hop C. I will encrypt my message to you, then I will encrypt that entire message to hop C, then that entire message to hop B, and then to hop A, and then I send the message to hop A. All A sees is that it goes to B. All B sees is that it goes to C. And all C sees is that it goes to you. The problem is that this creates a degree of trust: you have to have a sufficient number of nodes behaving honestly on the network in order to actually get privacy. Otherwise, if some government agency, like the U.S. government on Tor, operates the majority of all the hops in the network, then they can immediately see "Oh, this person sent a message to you," because they were able to trace the entire route. What a mixnet provides is a greater degree of privacy on the network, resolving the k-anonymity problem that even Bitcoin had: in order to hide data in sets, you need some actual volume of data. The way we provide message routing in the network is that every node participates in an MPC-oriented protocol where they create secret sharings of a permutation matrix. If you're not familiar with what a permutation matrix is, it's essentially a matrix, a collection of numbers in a grid, that is all zeros except for ones that are uniquely placed per row and column. In other words, if I have a one in the top left of the matrix, nothing else on that row or that column will be a one; the rest are zeros. So I create this entire collection of ones and zeros for the matrix. What's interesting is that if you treat your input series of messages as a vector that can be multiplied against that matrix, you end up with an output that uses those ones and zeros to essentially scramble the order of the messages. Now, if I were just doing this myself, then I could lie: I could put a straight diagonal of ones and the message order is preserved. But if you have the rest of the network also participating in creating secret sharings of random permutation matrices of their own, when you recombine them, you end up with the multiplication of all those permutation matrices against each other, and so you end up with a truly random permutation matrix. But again, that means at least one of the nodes on the network has to behave honestly. Where things get interesting is that in order to send a message on the network, you have to participate in that too. So if at least one sender of a message is being honest about their contribution to the random permutation matrix, then no one learns anything: the actual sort order of the messages is completely unknown and unlinkable to you. And the nice thing is that if everyone's dishonest, then no one learns anything anyway. So that gives you a true mixnet over this onion-routing-style network, with full hop-level privacy.
Nobody watching the network can see anything going on, and anybody who is passively observing the network, or even maliciously attacking it, cannot see where your particular requests are being routed. A mixnet just provides that level of privacy on top of whatever bar of privacy is already set on the network.
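
A toy Go sketch of the permutation-matrix idea: multiplying a vector of messages by a permutation matrix reorders them, and composing several parties' permutations yields another permutation, so one honest contribution is enough to randomize the order. The real protocol secret-shares these matrices inside MPC; this plain-text version only shows the shuffling mechanics, and all names are illustrative.

```go
package main

import "fmt"

// applyPerm multiplies a message vector by a permutation matrix:
// exactly one 1 per row and column, zeros elsewhere, so the output
// is the same messages in a scrambled order.
func applyPerm(perm [][]int, msgs []string) []string {
	out := make([]string, len(msgs))
	for row := range perm {
		for col, v := range perm[row] {
			if v == 1 {
				out[row] = msgs[col]
			}
		}
	}
	return out
}

func main() {
	msgs := []string{"from-alice", "from-bob", "from-carol"}

	// One party's permutation...
	p1 := [][]int{{0, 1, 0}, {0, 0, 1}, {1, 0, 0}}
	// ...composed with another party's permutation. As long as at least
	// one contributor chose theirs honestly at random, the combined
	// ordering is random and unlinkable to any single participant.
	p2 := [][]int{{0, 1, 0}, {1, 0, 0}, {0, 0, 1}}

	mixed := applyPerm(p2, applyPerm(p1, msgs))
	fmt.Println(mixed)
}
```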

Nicholas: Wow. And is that already in production in other projects? Is that borrowed from somewhere in particular?

Cassandra Heart: Yeah. So that's the funny thing: I picked up that random permutation matrix from some of the research being done on Ethereum. There's a group of people who are very similar to me: they don't like MEV, they think MEV is a disaster. And so they came up with an approach using RPM, the random permutation matrix, to basically provide a random sequencing of transactions for a given block so that MEV cannot work. So, yeah, I ran some experiments with that and was able to immediately demonstrate that the secret sharing technique worked for matrices. Technically, the RPM paper wasn't the first to do that either; it was actually a secure machine learning paper that generated the seed of the idea. And the nice thing is that, yeah, that gives us a mixnet for now, and it's really fast, and that's cool, but it also gives us one of those tools in the Lego box that we can later use for secure machine learning on the network.

Nicholas: Very cool. What's a hypergraph?

Cassandra Heart: A hypergraph. This is one of the funniest things, because I've been a big advocate for language that makes it easier to understand what's going on, and I mistakenly thought hypergraph was going to be an easy term. By consequence, now there are all these memes, like the one of the guy holding a butterfly saying "Is this a pigeon?", except it's a hieroglyphic and the caption has been replaced with "Is this an oblivious hypergraph?". That's unfortunately the consequence of my own decisions there. A hypergraph is literally just a graph in which an edge can connect more than two vertices. That's it. In other words, most people know what a graph looks like: it's just points and edges that connect the points. If your edges can connect more than two points, it's a hypergraph. That's it. Hypergraphs are a powerful mathematical construct because they can represent a large number of different types of data sets. For example, there's a thing almost every computer science student learns in college: you can represent a graph using keys and values. You can easily represent the relationships of a graph just using a key-value store. And that actually works the other way around, too: you can use a graph to represent those keys and values. With hypergraphs you get that same relationship, but there are also many other relationships you can represent. Things like a relational database, a traditional SQL database, you can represent as a hypergraph. If you want to do any sort of wide-column storage, you can represent that on the hypergraph as well. It's just an efficient datastore that, because of the nature of how it is produced, lets you leverage other techniques to give it the same kind of privacy that other oblivious datastore types can provide. So it's a best-of-both-worlds efficiency that serves many different purposes and does so privately.
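
As a rough illustration of the data structure itself, here is a minimal Go sketch of a hypergraph: vertices that carry values (the key/value view) and hyperedges that may connect any number of vertices. The types and field names are invented for this example and are not Quilibrium's schema.

```go
package main

import "fmt"

// A hyperedge can connect any number of vertices, which is the only
// difference from an ordinary graph edge.
type Hyperedge struct {
	Label    string
	Vertices []string
}

// Hypergraph: vertices carry values (the key/value view), hyperedges
// express relationships among any number of them.
type Hypergraph struct {
	Vertices map[string][]byte
	Edges    []Hyperedge
}

func (h *Hypergraph) Set(key string, value []byte) { h.Vertices[key] = value }
func (h *Hypergraph) Get(key string) []byte        { return h.Vertices[key] }

func main() {
	h := &Hypergraph{Vertices: map[string][]byte{}}
	h.Set("users/alice", []byte(`{"name":"alice"}`))
	h.Set("users/bob", []byte(`{"name":"bob"}`))
	h.Set("channels/general", []byte(`{"topic":"hello"}`))

	// A single hyperedge relating three vertices, e.g. "members of #general".
	h.Edges = append(h.Edges, Hyperedge{
		Label:    "member-of",
		Vertices: []string{"users/alice", "users/bob", "channels/general"},
	})

	fmt.Println(string(h.Get("users/alice")), len(h.Edges))
}
```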

Nicholas: Got it. So in review, how do hypergraphs, all of the--is there a way we could describe how all these components come together to create a substrate for private network communication, execution, and storage? Is there some way we can bundle this up so people can understand how these parts come together?

Cassandra Heart: Yeah, so one of the things we're actually publishing in the next couple days is a builder's guide, 'cause this has been one of the tragic ironies: I've told people over and over again that in order to build on the network, you just write Go. And that has not been easy for people to grasp, because they haven't grokked that it literally is that simple. You literally just deploy Go code to the network, and it turns into garbled circuits so that it can execute on the network. You don't have to think about any of that. But realistically, yeah, it's very straightforward. You treat the network like a big box of data that you can store code on, or store objects on, through very simple get and set. Or you can execute an application by loading it and evaluating it in that MPC context. But that is all completely opaque to the end user. They just add data, remove data, change data, or execute data. It's the same way your laptop works: when you open up a terminal and type in a command in bash, what is actually happening behind the scenes? It loads that section of data into memory and then executes it, following whatever the execution context and ABI of your particular machine is. It's the same concept, just applied to a distributed network. So from the user's perspective, it's very simple. You know how, when you use Ethereum, you open up your wallet, connect to a dApp, and then you are presented with a whole bunch of transactions to sign?

Nicholas: Sure, yeah.

Cassandra Heart: In Quilibrium, you're basically doing the same thing, except instead of having to deal with a wallet application, you use passkeys, and instead of having to think about a series of iterative executions as different things, you just do the same thing over and over again. So if you're building against it, you just write Go. If you are just dealing with the raw data store, you can use the oblivious hypergraph calls, or you can literally just set and load. There's no complicated aspect to actually interacting with it; it's the underlying Lego blocks that are complicated in and of themselves.
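
A sketch of what that developer experience might look like in Go, written against a hypothetical Store interface; every name here is invented for illustration, and the real SDK's calls will differ. The point is only that application logic stays ordinary get/set code.

```go
package main

import "fmt"

// Store is a stand-in for the kind of key/value surface described
// above: the network as "a big box of data" with get and set.
// This interface and everything behind it is hypothetical.
type Store interface {
	Set(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// memStore is a local fake so the example runs; in the real flow the
// same calls would be served by the network's oblivious hypergraph.
type memStore struct{ data map[string][]byte }

func (m *memStore) Set(k string, v []byte) error { m.data[k] = v; return nil }
func (m *memStore) Get(k string) ([]byte, error) { return m.data[k], nil }

func main() {
	var store Store = &memStore{data: map[string][]byte{}}

	// Application logic is just ordinary Go against get/set.
	store.Set("greeting", []byte("hello, Q"))
	v, _ := store.Get("greeting")
	fmt.Println(string(v))
}
```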

Nicholas: Right, so diving more into the practical perspective from someone who wants to either be a user or wants to participate in the network. So you mentioned you use passkeys rather than a wallet in order to interact. Maybe can you explain a little bit what the experience would be like as a user? Okay, so passkeys, so something that's available in every operating system, every browser. Would I be connecting to an RPC, or how do I think about the relationship between the user and the network itself?

Cassandra Heart: Yeah. So, for application development, there are a lot of developers wanting to integrate Quilibrium with whatever their web application is in its current state. Long-term, we actually want to do TLS bridging, so applications will live on the network in their entirety, including the web server serving up HTTPS. But until then, you would have an RPC bridge that the website embeds as part of a Quilibrium SDK. For the end-user experience, they would be prompted to use their passkey for authentication, and that authentication provides whatever authorized resources are allocated to that particular passkey. For example, all of Quilibrium was originally created to build a clone of Discord, a decentralized Discord clone. So let's say you wanted to log into this decentralized Discord clone. You would use passkeys to log in. That corresponds to your actual private key, which can perform all the key management operations necessary to prove who you are and prove that you're authenticated to interact with any of the data that belongs to you on the network. But the difference is that, unlike a wallet, which is one singular key that corresponds to all of your finances (and even if you're using multiple wallets under a single seed phrase, those are just key derivations of the same original key, so if your seed phrase is stolen, your entire wallet is forfeit), passkeys work on a per-domain basis. They're unique keys that are unique to the domains, and there is no way to correlate one key on one domain with a key on a different domain, unless you manually correlate the two.

Nicholas: So that would mean that if I was using two applications that were on Quilibrium, that I would have different keys, different passkeys?

Cassandra Heart: Correct, yes, and they're partitioned by your browser. The nice thing about that is that, unlike a wallet, where you have to manually do a lot of key operations yourself because you're constantly being presented with transactions, and then you also have to apply scrutiny to those transactions, because you don't know if that transaction is real or if it's going to drain everything, they're limited. They are limited to the domain in which they're operating within. So in other words, if I am using this Discord clone, the worst-case scenario I'm aware of is that I could potentially lose access to my account on this Discord clone, but I would not lose my funds.

Nicholas: I see, got it. But I guess, do you have a sense of-- if each application on each domain has a separate key, a separate public key, and, by extension, passkey/private key, is there a notion of having a shared identity across multiple applications, or rather I'm logging into each one separately?

Cassandra Heart: I mean, you would log into each one separately to prove your key for that particular application, but you do that anyway. So say, for example, I have a Google account, and say I have four different websites that I've signed into with Google. Of course, I don't do that, because I don't trust Google for anything, but let's say I do. Let's say I have a Figma account, a Canva account, whatever. To do that, there is a multi-legged authentication that happens, called OpenID Connect. When you go to that site, like Figma, and you choose to log in, it presents you with a bunch of login options. You choose Google, it takes you to Google, you authenticate through Google to prove you are who you say you are, and then you do the consent step, where you allow Google to provide an authentication token back to Figma so that Figma can prove you are who you say you are. So you've now linked those two identities together. That same type of OpenID Connect-style authentication step is exactly what people will use in the Quilibrium design model. It makes it so much easier to just do things the way things have always been done, because there's so much tooling that already integrates with things like JSON Web Tokens, and so much tooling that already integrates with passkeys. If you simply conform to those standards, instead of trying to invent your own like Sign-In with Ethereum does, then you end up with a very composable, natural fit for a decentralized network to integrate with things the way they currently are. And then, as time goes on, it can start to absorb more and more of those standards into the body of the network itself.

Nicholas: If someone wants to participate in the network as a node operator, do they run a node or hub-type software? What do they actually do, and what does that software do?

Cassandra Heart: Yeah, so if you want to participate in the network, you literally just clone the repository. You can choose to run straight from source if you're a person that really wants to verify that the application is doing what it says it's doing. Our node software is licensed AGPL, so if you want to fork it, if you want to modify it, if you want to try to pen test the network, which there are definitely people trying, then by all means please do. The license lets you do whatever you'd like, as long as you contribute code back. As far as actually running a node: when you run a node, you're just participating in the base layer protocol. For now, while we're gearing up to get to the finalized 2.0 release, that means you are running the core master clock intrinsic. At one point it meant you were behaving in accordance with the MPC protocol, which was basically just perf testing the crap out of how fast we could evaluate MPC on the network and whether we'd hit any hiccups. And boy, did we. Through this stage we're just testing a lot of different components and making sure they behave according to spec and according to performance, and then of course people get rewarded for that. As we get towards 2.0, that includes all the other components in the network coming online: the hypergraph itself, the mixnet, the onion routing. All those things have to be rolled out in stages so we can vet that the protocol is behaving like it's supposed to, especially in adverse conditions. You know that famous phrase, show me the incentive and I'll show you the outcome? Well, since this was an incentivized series of steps, despite the network not being fully live yet, we have of course been Sybil attacked like crazy. We've had people trying all sorts of different ways to take the network down. We've been DDoSed, we've been everything. So we've had to learn from that, we've had to adapt and find where things are going wrong, and if any adjustments need to be made, make those adjustments.

Nicholas: Talking of scaling and facing the challenges of scaling, I noticed in the documentation you talked about some inspiration from, I don't know if I'm pronouncing this right, ScyllaDB?

Cassandra Heart: So, I don't know if you're familiar with it at all, but basically it's a re-implementation, not a fork, of the Apache Cassandra database. No relation. It's based in C++, instead of Apache Cassandra, which is built in Java. They have full compatibility with CQL, the query language for Apache Cassandra, but it's structured differently. In the traditional CQL world you have this idea of sharding that is based on varying configurable levels of consensus. ScyllaDB takes a much different route; they're much more pragmatic. In fact, Scylla actually started off as a reinvention of an operating system, and then they pivoted into building a database. The way they designed it all is this kind of shared-nothing approach. For example, when you think of traditional databases and you're sharding them out, you might have two different node instances contributing to the database's processing using the same underlying network-attached storage, whereas ScyllaDB takes the approach that the data is fully not shared. You might have replications of that data across different nodes, but the nodes individually maintain their own share of it; there is no sharing of resources in that context. Quilibrium does the same thing. The Bloom filter basis adequately separates data out on the network so that it is covered for replication and also fully divvied out, so that the individual nodes maintaining those core shards of the network are fully segregating that data. They're effectively forced to, cryptographically: in order to provide that proof of meaningful work, you have to have a unique key that corresponds to that data, such that you provide a unique proof. You can't just say "I'm covering all three of these shards" because you have the same data sitting in one place; if you are covering three shards that all have the same data, you actually have to prove that you are encrypting the data into each of those three shards' expected key formats.
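
A small Go sketch of the shared-nothing idea described above: the same payload must be committed under a shard-specific key, so one stored copy cannot be reused to claim coverage of several shards. The derivation below is purely illustrative; Quilibrium's actual key format and proof scheme are different.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// shardKey derives a shard-specific key for a payload, so the proof a
// node produces for shard A cannot double as the proof for shard B.
// The real protocol's key format differs; this only shows the shape
// of the idea.
func shardKey(nodeSecret, payload []byte, shardID uint32) []byte {
	h := sha256.New()
	h.Write(nodeSecret)
	h.Write(payload)
	h.Write([]byte{byte(shardID >> 24), byte(shardID >> 16), byte(shardID >> 8), byte(shardID)})
	return h.Sum(nil)
}

func main() {
	secret := []byte("node-local secret")
	data := []byte("replicated payload")

	// Covering three shards means producing three distinct commitments,
	// one per shard, even though the underlying data is identical.
	for _, shard := range []uint32{7, 42, 311} {
		fmt.Printf("shard %d -> %x\n", shard, shardKey(secret, data, shard)[:8])
	}
}
```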

Nicholas: Okay, wow. You mentioned incentives briefly a moment ago. How does the Quill token incentivize storage and computation? What are the bottlenecks on performance or participation in the network that the token and the incentives aim to resolve, and how do they do that?

Cassandra Heart: Yeah, so the approach we're taking is actually more of a free market. Ethereum has this notion of gas: gas fees according to the various opcodes being executed, and according to the gas limit for a given block on the network. That works under a blockchain-oriented system. But we scale out; we don't have to worry about individual nodes competing with one another for execution precedence. If they are the leader for that particular shard, they're the ones who are executing at that time. What that creates is a different kind of marketplace dynamic. If you have an open system where people are contributing more and more compute horsepower to the network, you don't actually want to consolidate things into a singular gas market; that makes no sense. So instead you end up with a dynamic where all the individual core shards are effectively competing to outbid each other on what the actual execution price is, and we expect that for a very long time on this network that price is actually going to be zero. Kind of similar to how the gas fees for blob storage were as close to zero as possible until the Blobscriptions thing happened, and that eventually petered out, so now it's back to almost zero again. We expect that same dynamic to happen on Q. So the biggest reason people are incentivized to participate in the network is that there is a limited number of tokens, per proof of meaningful work's tokenomics algorithm. The earlier you are to the network, the more tokens you'll earn for that duration, and eventually it caps out around roughly 2 billion for this current generation. So by consequence, the earlier you are, the more tokens you get. It doesn't matter if you're not charging execution fees, because you're actually just competing to earn tokens. Then, once that does peter out over time from the decay value, you end up with an actual marketplace where people are trying to outbid each other on execution fees.

Nicholas: Okay, I see. We've done a decent overview of some of the technology involved, but what are the capabilities that Q unlocks if you're an application developer or a user? Can we relate this to applications or platforms-as-a-service that people are familiar with? And maybe, does it also unlock new capabilities that are not things we're familiar with yet?

Cassandra Heart: Yeah, so from a business perspective, obviously Quilibrium is a protocol, but Quilibrium Inc. is a company. We're not going into this with the expectation of just making a public good and that's it; obviously, as a company, our goal is to also make money. What Quilibrium Inc. is trying to do with the network is essentially use the technology of the network to deploy platform-oriented services that can do exactly what you're asking, which is either leverage the network in a way that is cost efficient, or leverage it in a way that provides additional privacy or security. What I can illustrate is our two primary products that Q Inc. is going to launch with in conjunction with the 2.0 release of the network. The first is basically following in Amazon's footsteps. The very first thing Amazon provided was S3. Having an S3-compatible API is very easy: the network is simply put and get, you store things on the network, you retrieve things on the network. That is the easiest thing to do. And offering an S3-style service where the data remains actually private to the end user storing it, that's already a powerful enough primitive. That is something Amazon can't even guarantee themselves. They have this weird combination where you can use KMS to store data, but at the end of the day, they also control the keys for KMS. So in order for you to have any sort of raw security of the data you put on S3, you have to encrypt it yourself. With Q, you don't have to worry about that, because it's seamless to you through the request model. So that's already a value proposition we provide that is intrinsically more helpful than what AWS currently does. But it's also something we can compete on price around. So just from the perspective of "I'm an application engineer, I need an S3 API for some reason," I might use Q because it's cheaper and more distributed. Then there's the other tooling that we want to launch with too, and that is essentially a KMS-compatible API. Like I was saying, Amazon's Key Management Service is essentially trust-based. You're trusting that Amazon's deployment of their various hardware security modules is actually secure, that they aren't engaging in any agreements with state actors to provide that data to them, and so on. Whereas with Quilibrium, everything operates in an MPC context, and so you get some immediate value adds that you can't get from the trusted model. One great example of this is for wallet-as-a-service-oriented providers. A lot of them claim they're MPC wallets, but in reality, they're not. Or at least, they're not really multi-party computation; they're multi-party key splitting. In order to operate, what they actually do is spin up a Nitro Enclave, in other words a trusted execution environment. Then they have the user, as part of their SDK, submit the encrypted key shard to that trusted execution environment, where it is decrypted and combined with the other key shard the provider holds. Then they perform signing using the recombined key, they flush that from memory, and allegedly the key material was never recombined. Unfortunately, a lot of those providers are wrong. The reality is that regulators don't see it that way.
There are sophisticated regulators in the industry now, including the NYDFS. I would know, because I was one of the people who built an MPC wallet at Coinbase, and they definitely understood what it meant to be custodial. If you do recombine the key material at any point, you are in possession of the key material, regardless of whether it is later flushed from the memory of the system. That makes you a custodian, and the NYDFS is very strict about that; there are certain compliance obligations you have to fulfill. So the huge value add that Quilibrium offers with our KMS service is that it basically enables you to de-risk yourself. Instead of using a trusted execution environment, you can use Quilibrium as kind of a Switzerland for data, where the user of your SDK sends their key shard to Quilibrium, and you send the internal key share you've held to Quilibrium. Through the use of an MPC-oriented application, because you just write Go, you can encode an ECDSA signer in the simplest Golang case: you have two inputs, one from your application and one from your user, you recombine them however you need to using the code, and then you perform the ECDSA sign operation using that recombined private key. Code-wise, that's very simple. Even somebody who has no familiarity with MPC can at least recognize that it is very easy, very straightforward: I just recombine those values and sign. But the nice thing is that when this actually goes through the circuit garbler of Quilibrium, you get the actual MPC execution of all of that code without having to understand how to turn it into an MPC-oriented application. Our KMS service just takes that one step higher. With a simple SDK, where the user integrates one half of the SDK and you integrate the other half, you send those shards in to perform that sign operation. So even the simple part is made simpler for developers.
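
Here is roughly what that "simplest Golang case" could look like, written as ordinary code: two additive shares of a private scalar come in, get recombined, and an ECDSA signature comes out. In Quilibrium this source would be compiled into a garbled circuit so the recombined key only ever exists inside the MPC evaluation; run directly as written, it is just plain custodial signing, shown only to illustrate how little the developer has to write. Treating the shards as additive shares mod the curve order is an assumption of this sketch.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"math/big"
)

// sign recombines two additive key shares and produces an ECDSA
// signature. Written as ordinary Go, this is the kind of function
// that would be turned into a garbled circuit so the recombined
// scalar never exists outside the MPC evaluation. (Additive sharing
// mod the curve order is an assumption of this example.)
func sign(share1, share2 *big.Int, msg []byte) (r, s *big.Int, err error) {
	curve := elliptic.P256()
	n := curve.Params().N

	// Recombine the private scalar from its two shares.
	d := new(big.Int).Add(share1, share2)
	d.Mod(d, n)

	priv := new(ecdsa.PrivateKey)
	priv.Curve = curve
	priv.D = d
	priv.X, priv.Y = curve.ScalarBaseMult(d.Bytes())

	digest := sha256.Sum256(msg)
	return ecdsa.Sign(rand.Reader, priv, digest[:])
}

func main() {
	share1, _ := rand.Int(rand.Reader, elliptic.P256().Params().N)
	share2, _ := rand.Int(rand.Reader, elliptic.P256().Params().N)

	r, s, err := sign(share1, share2, []byte("transaction bytes"))
	fmt.Println(r != nil, s != nil, err)
}
```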

Nicholas: Very cool. Do you think it could also serve as a replacement for something like a private encrypted communications protocol, something like what Open Whisper Systems does?

Cassandra Heart: Yeah, that's one of the things that we've... again, because this all started to build a decentralized Discord clone, one of the very first inventions that came out of that was designing Triple Ratchet, which is a group-wise communication, or group-wise encryption, protocol. What that provides, as part of our collection of Lego block elements, is a primitive for easy communication over the network. You can either use it in an ephemeral context, where you're just sending data over the network to a bunch of peers that can identify themselves by their rendezvous points, kind of like Tor hidden services, or you can use the network itself as an asynchronous broker of that data. So if you are, for example, writing a Signal equivalent, you treat Q as the Signal server, and you provide that message inbox/outbox approach through applications you deploy to the network. Then everyone running this Signal-equivalent application on their phones or computers or whatever can connect, just like they would connect to Signal's APIs, to retrieve their particular inbox's set of data and then perform all the decryption locally.

Nicholas: In the application where it's serving as an S3 equivalent, how do you ensure, or is it something you would write into the code to ensure, that there's sufficient duplication across the network, such that if certain nodes are no longer live, the data is not lost?

Cassandra Heart: Yeah, that's actually part of the oblivious hypergraph's underlying premise: the way we're able to ensure the data is replicated through the network is that it guarantees reconstructability. I don't know if you're familiar with the approach Ethereum was going to use for their sharding; they were basically going to do a complete data re-replication through the network using parity bits. Okay, so in the world of RAID, like hard drive RAID, there is this thing called striping, and striping uses what are called parity bits in order to reconstruct data. It's basically just extra bits that mathematically relate to the data you're storing, such that you're able to reconstruct it. They created a construction where essentially half of what you store is the actual data and the other half is all parity bits. That blows the data up to 200% of its original size, which is quite expensive. But it's nice, because it guarantees that, at the very least, the data is going to be sufficiently replicated through the way they shard data across the network, or at least it tends to. What Quilibrium does is something very similar, except we take the raw RAID 6 approach, with striping over that Bloom filter basis of the network. And obviously, like I described earlier, there is this notion that if you have the data and you want to prove it for each shard, you have an intrinsic incentive to just claim all three of the shards it can land on and try to produce proofs for it. That doesn't work, though, because you actually have to replicate that data in a specific way based on which shard it lands on, so that you're guaranteeing that you are, in fact, replicating that data three times over. From an economic incentives standpoint for the network, it is actually much cheaper, incentives-wise, to just evenly distribute it out. So that gives us the ability to shard data out in a very simple way that encourages replication.
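
To illustrate the parity idea with a toy example: below is a single-parity, RAID 5-style XOR sketch in Go, where the parity block rebuilds any one lost data block. RAID 6, which is what is referenced above, adds a second independent parity so two losses are survivable; the striping over the Bloom filter basis is not shown here.

```go
package main

import "fmt"

// xorParity computes a single parity block over equal-length data
// blocks. If any one block is lost, XORing the survivors with the
// parity rebuilds it. (RAID 6 adds a second, independent parity so
// two losses are survivable; this toy shows only the single-parity case.)
func xorParity(blocks [][]byte) []byte {
	parity := make([]byte, len(blocks[0]))
	for _, b := range blocks {
		for i := range parity {
			parity[i] ^= b[i]
		}
	}
	return parity
}

func main() {
	blocks := [][]byte{
		[]byte("shard-A-"),
		[]byte("shard-B-"),
		[]byte("shard-C-"),
	}
	parity := xorParity(blocks)

	// Simulate losing block 1, then reconstruct it from the rest plus parity.
	rebuilt := xorParity([][]byte{blocks[0], blocks[2], parity})
	fmt.Printf("recovered: %q\n", rebuilt)
}
```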

Nicholas: And these shards are supported by multiple nodes, such that if an individual node were to drop out, the shard's integrity is retained, and the nodes that drop out would be replaced by new nodes coming online?

Cassandra Heart: That's correct. It's actually a replication rate that is quite extensive. It's not just the three individual bloom filter subsections that lead to that reconstruction. It's also within those individual subsections, there's many, many provers for that subsection of data. So there's roughly anywhere between, like, 128 plus nodes that are all collectively like reassembling that data at any time. So there's a significant bar that has to be met for data to be considered replicated throughout the network.

Nicholas: Got it. Got it. We touched on it a little bit before, but I wanted to just ask a little bit more clearly, how is quill issued? And what is the reward schedule that people can anticipate? Like, yeah, how can people think about quill exactly?

Cassandra Heart: Yeah, that's one of the trickier topics, because the nature of the network is supply and demand oriented. There is an incentive to draw more nodes to the network if the network is underutilized. Basically, this comes down to a formula that is an inverse decay model: the more data the network is producing, and the more nodes there are on the network providing replication of that data, the fewer tokens are issued. We expect that, based on the growth patterns of the network, issuance will roughly stabilize at a cap of about 2 billion Quill tokens. The nature of how rewards work on the network is different from what a lot of other networks do. On Ethereum, for example, when you produce a block, you are setting a reward value for that particular block producer. On Bitcoin, when you produce a block, you set the coinbase transaction to the miner or mining pool producing that block. But on Quilibrium, instead of writing an economic discrete unit into the network as a protocol-native object, you are collectively producing proofs over that data, the proof of meaningful work. From those proofs, you are able to interact with the Quill token application, which lives on the network as an application like any other. It has a mint function that you can provide those proofs to, and it will mint out tokens relative to the number of proofs you provide for the data being proven over. So you actually have two different varieties of experiences: you can, as a node, have your node continually ping that mint function and continue to accumulate, or, if you want to be more efficient, you can do it in batches.

Nicholas: OK. We had a question from the audience. Uncle Devo on Farcaster wanted you to explain Q Console.

Cassandra Heart: Q Console. Oh, so Q Console has been a very, very difficult thing to get out there. Basically, it's an AWS console-style equivalent that we're wanting to launch. We've had many hang-ups along the way in getting it out there because, you know, Quilibrium Inc. is me. Trying to balance my time, because I also work part-time for Farcaster, between actual releases of the network protocol, bug hunting and support and triage for the protocol, and then also trying to get Q Console fully available and out there, it's a challenge. But Q Console is basically just like the AWS console, but for Q resources. We have some integrations that make it really nice for things like Farcaster. For example, the Doomframe is actually a Q Console application. It's just kind of a kludgy one at the moment, because we need to make more things more user-friendly.

Nicholas: Got it, I see. You mentioned Farcaster and your part-time work there. I'm curious, how do Quilibrium and Farcaster relate? I suppose a direct comparison wouldn't really make sense, but from a distance at least, and for people who are more familiar with centralized options, they are sort of in the same part of the space. How do they compare to one another, and how do they interact? How do you imagine them fitting together?

Cassandra Heart: Yeah, it's interesting, because working at Farcaster part-time, there are times I've had some hard lessons learned from Q, because we had that incentives model and hit a lot of those growing pains way faster. People running hubs are generally doing so because they get some sort of value out of the data of Farcaster hubs, but they're not getting any economic value out of it unless they're running it as a service. And big shout-out to them for that, because they made it really easy. From the Quilibrium experience, we had hundreds of thousands of nodes immediately banging on the network, whereas with hubs there's about a thousand. That's a much smaller quantity, easier to reason about; whenever there's any sort of consensus fault, it's easier to figure out. When you're running hundreds of thousands of nodes and everyone is trying to submit proofs at the same time for this 10-second interval, you definitely find some bugs, and that's been helpful. But comparably, the protocols are totally different. We do use the same kind of libp2p stack, like everyone basically does these days, because it handles a lot of the headaches for you, although it does also bring some headaches of its own. That's the trade-off. Outside of that, the big thing Farcaster does that's interesting in how it could interact with Quilibrium comes down to frames, because you have a decentralized authority that is tracking and validating signatures, and that framework for validating signatures can also be encoded into Quilibrium, so you can do things like Farcaster frame support, like we do with Q Console. There's also the fact that right now, Farcaster has a certain amount of data it has to hold relative to the number of users and the number of storage slots they purchased. That has an intrinsic limit. Any time you add a new data type, you are basically telling the hub operators, "Hey, you now have to encode X number of extra bytes, possibly per user," and that can get expensive. So you always have to make that decision, when you're adjusting the hub protocol, about how much data you're willing to store per user. Whereas with Quilibrium, the network itself is a distributed data store, so a different economic model comes into play. I foresee a future where there is a lot of interaction between the two for things that are very data-consuming. Right now, for example, if you post a picture to Farcaster, you're not posting a picture to Farcaster, you're posting a link, and that link has to live somewhere. The various Farcaster apps out there have their own services they plug in, whether it's Cloudflare or Imgur or something in-house. With Quilibrium, you could just use Quilibrium's data store. So there are lots of opportunities for that kind of interaction. And one of the long-term goals is to make it so easy to run anything on the network that you can run full virtual machines on it as well, which means that at that point, you could run a hub on Q.

Nicholas: Oh, very cool, very cool. I realize I didn't really ask this yet, but I suppose it's always the applications that are going to be the ones spending Quill in order to deploy or interact with the network, and typically users don't have to spend?

Cassandra Heart: Right. The entire model around what Quilibrium is intended to be is an alternative to platforms as a service. So if you're thinking of end users the same way you think of end users of Ethereum, then you're already thinking of it the wrong way, because Ethereum exposes too much to the end user. That's the reason crypto is so hard to get adoption on: people have to learn what the heck a wallet is, they have to learn what the heck it means when they're looking at a transaction request and whether or not it's even safe. That's too many wires exposed, and it's a consequence of the design of the network. Whereas Quilibrium is designed to be pragmatic, so that things are very compartmentalized, applications are easy to build on top of it, and the only thing the end user will ever do that directly relates to a raw component of the network is use passkeys, which they're already trained to do. So it loses a lot of the complexity and a lot of the leaky abstractions that exist with other networks, and that makes it very easy for users to just use it like they've been using web apps for decades now.

Nicholas: I've heard some people, for example in the interview I did with Pete Horn about his fourth energy ideas and project, suggest that maybe there's a mistake in the pricing of Ethereum computation. Associating it with gas for execution of on-chain transactions ignores the reality that, at scale, the majority of the compute is actually view function calls on RPCs, and that's not priced into the economic model of the underlying network. Maybe it doesn't relate to the security exactly, but functioning as an RPC provider does, and the consequence is that it leads to everyone interacting with things like Infura and Alchemy, who have their own ways of pricing external to the network. Does that relate to Quilibrium? If end users are just signing transactions or messages with passkeys, the node operators nevertheless have to pay for the bandwidth to serve the data they're storing in their shards. Does that fit into the economic model at all, or do you see something like a Neynar or an Infura popping up?

Cassandra Heart: No, actually, that's an interesting question, and a lot of people do miss that one key insight. I kind of regret not bringing it up earlier, so I'm glad you asked. One of the things I considered a lesson learned from Ethereum, and this is something others in the space have brought up, like Moxie when he was trying to understand Web3 for what it really is instead of just thinking crypto is this kind of shit heap: he pointed out very acutely that if the goal is decentralization, there are a lot of centralizing factors in the network. The reason Ethereum, for example, has resulted in indexers like Infura, or even entire protocols around specific indexing use cases like The Graph, is because the construction of Ethereum, first off, prices reading in entirely the wrong way. Either you run a node, or you are reading directly from the network as part of a contract call and paying a lot of gas for it. That is broken. But the indexing of data is broken too. The way data gets indexed on the network is not a natural fit; per application, you have to come up with your own way to index the data. Not only is that not developer friendly, it's also not user friendly, because users can't access any of that themselves unless they build an indexer or use one that is tuned for it. That's really broken. What we do differently with Quilibrium's design is that every single type of data that lands on the network has some sort of schema attached to it. You can think of Ethereum as having its own kind of schema attached too, when you have a contract and you provide the source code to that contract, and you also have things like events that end up in the event log from Ethereum execution. On Q, these things are intrinsically bound to schema. We have this mutual construction that relates the concept of schema, and the data relationships of that schema, to elements on the hypergraph. So by consequence, we are constantly providing data in a fully schema-encoded way that maps onto the hypergraph in a way that is very efficient to query. The cost of querying is lower, and the ability to rely on the network as a self-indexing primitive is now possible. You get the ability to relate to this data like you would a relational database, because, like I said earlier, hypergraphs can represent any kind of data store, including relational databases. And because of the nature of how we're indexing that data intrinsically through interacting with the network, you get all the indexer benefits that you've had to rely on third parties for on other networks. So I'm glad you asked that question, because that's actually a very powerful difference: our economic model is meant to make that data inexpensive to retrieve. If you are running a node of your own, of course you can query it for free; you set your own price for yourself. But even if you are just interacting with the network, it's still really cheap to execute those query calls.
And you can do advanced query calls on it instead of having to pull all of the data down and perform a specific evaluation of that data set.

Nicholas: So do you imagine, then, given all the efficiencies, that it might just be more reasonable for a wide swath of node operators to run some kind of extension that allows them to price view function calls and sell them directly, rather than having something like an Alchemy? So the role that Neynar and Alchemy play could instead be distributed across the network, with more node operators playing it?

Cassandra Heart: I mean, yes, but that's actually something that's just baked into the oblivious hypergraph itself. So it's not something that I expect will evolve; it just intrinsically will be there as part of 2.0.

Nicholas: And then, so who's paying for it if I, as an end user, am just sort of accessing some service that's available on Quilibrium?

Cassandra Heart: Yeah. If you're an end user and you're just directly interacting with the network, then the nature of your request is going to be priced in Quill: whatever Quill cost that particular evaluator is pricing in for you, that's what you'd pay. If you are building a website that is exposing those raw queries, you probably would not want to do that, because it would be a massive pain and you'd have a lagged series of transactions that you'd have to process for the fees. So instead, from the web perspective, until we have straight HTTPS trunking that goes into writing TLS frames in an MPC context, there probably would be an intermediary that is pricing that in. Quilibrium Inc., for example, the way we have everything planned out, anything that's read-oriented is largely going to be free; we just subsidize the cost outright.

Nicholas: So then Quilibrium will operate nodes across every shard, such that it's able to serve all the data and execute as well?

Cassandra Heart: We basically take the data-greedy approach. Replication on the network runs in two different modes: you can do economically greedy mode, or you can do data-greedy mode, which means you cover the maximum amount of data. Because there are more shards than there are atoms in the universe, there is no way to adequately prove out all those shards individually, unless you had more CPU cores than there are atoms in the universe, which isn't even possible. So there's this notion of multiproofs, where you can declare large sections of cores that you wish to prove over and provide one proof for that whole collection of data. It becomes a lot more compact; you get far fewer rewards for doing that, but now you're proving over more data. We expect there's going to be a kind of prover chicken, where people will try the economically greedy approach, but they'll find it might not be valuable enough early on, and they'll want to retain their precedence as a prover. So they'll do the less economically valuable form of proving, and people will be in this dance of when to switch over to the economically greedy model. But Q will always remain in the non-economically-greedy model. We're going to stay in the data-greedy mode. We want to serve as much of the network's data as possible, so that people have an easier way to access it and build new and interesting things from it.

Nicholas: Amazing. I guess, rounding up, are there any areas we didn't cover that you think are important, that seem like holes in our discussion so far? I'm sure there's a million things we could talk about, but is there anything obvious?

Cassandra Heart: It could go any which way. So no, there's nothing particular that comes to mind.

Nicholas: Well, I'm extremely impressed that you're able to build all of this on your own. I'm curious, how do you do it? How are you able to keep so much in your head and have you considered bringing on other people to help with the project?

Cassandra Heart: I mean, there are a couple of different things to answer there. I've been in crypto for a very long time. I was a CPU miner in the early days of Bitcoin. I was on the metzdowd mailing list when the original white paper landed. No, I will never share the email address I used when I was on the metzdowd mailing list, sorry. But yeah, I was an active participant, not a thought leader at all, but an active participant in the discourse of what it meant to be a cypherpunk and what the future of that type of stuff would end up like. So seeing it all play out in real time has been really cool, but because I've been in it so long, it's not that hard to relate some of these ideas together and think about things over a much longer time horizon. In terms of how I keep it all in my head: one, like I said, I've been in it a long time, but also I ruthlessly compartmentalize my life. I run my life on Jira, my personal life. My spouse is disabled, so I also have to do various things in service of that, and that means I really have to ruthlessly compartmentalize. By doing so, I live on a schedule. I live on Jira for tracking every task. So when something comes up, or something I need to accomplish, or some longer story that breaks down into smaller tasks, I put it in Jira and I just don't betray that, because otherwise I would lose track of everything. Finally, on the developer side of things and people contributing to the code base: I would love to hire people. I do have one person working part-time to help keep nodes online and help manage some of the overall key management functions we have to do, but I don't have any full-time employees strictly working on Q, and I'd love to. Investors have been a very tricky subject for Q. We've not been shy to tell them how we feel about them, and so by consequence we haven't always gotten a favorable response, because we simply don't do things the way a lot of investors want. They want tokens, they want liquidity, they want us to betray all of the core values of what Q is trying to do, and we just refuse to compromise. That has made it hard to get investment, but we're getting closer, and from that point we'll be able to hire people. In the meantime, there has been a really large outpouring of people contributing to the open source repository, and we're actually bringing a few of them on as dedicated maintainers this week, because they've shown constant care and forward thinking about how to keep the protocol secure.

Nicholas: That's great.

Cassandra Heart: So that's, that's basically where we are now.

Nicholas: Very exciting. And I suppose the company could eventually be a target for investment, maybe more easily than the protocol.

Cassandra Heart: Well, that's the thing: I've been approaching investors from the perspective of investing in the company, not the protocol. The company is one of 17 individual signatories that cut releases of the protocol. There must be at least 17 signers in order for a protocol release to be accepted, so by consequence we are not the only authority, if you were to give anyone that title in the first place, managing releases of the network. You wouldn't be investing in Q Inc. if you're investing in the protocol; that's an entirely different context. If you believe that Q Inc. is going to make money in service of the work it provides around the protocol, that's why you would invest in Q Inc. We liken it to this: you would not invest in Intel because you really believe in AWS. You would invest in Amazon because you really believe in AWS.

Nicholas: Right. Right. Interesting. So it's been seven years now that you've been working on the project. You've worked on it very consistently through all kinds of bull markets, with crazy manias and metas. I don't know if you have any reflections on that?

Cassandra Heart: Oh, like I said, what's helped me keep a level head about things is that I've seen a lot of bulls and I've seen a lot of bears. You just keep building and ignore all of it.

Nicholas: Yeah, that's really what I wanted to ask: whether you have any thoughts or reflections on what it's been like to have such a consistent focus on a single project for such a long span of time, without getting distracted by all the other froth that's been going on, especially socially on Farcaster. Both of us spend quite a bit of time on Farcaster, and even Farcaster has its own long-term vision for what it's trying to achieve in particular. So I'm curious, from your longer time horizon, if you have any reflections on that.

Cassandra Heart: I mean, like I said before, I've been here since practically the beginning, and I'm not a big fan of short-term thinking in this industry, so I've never really been keen to participate in a lot of that. Sometimes I get dragged into it against my will, but I never intend to participate in it in any sort of meaningful aggregate. There's a phrase that often gets used, I don't know where it originally started, I want to say it was a Y Combinator-ism of some kind that proliferated through their various children companies, but it goes: it's never as good as it seems, it's never as bad as it seems. So when everyone is saying "we're so back" or "it's so over," depending on the current bipolar state of the industry's viewpoint, it just doesn't mean anything to me, because math is eternal. Whether or not someone's opinion about the economic output of an industry is in some certain state doesn't change whether math will continue to work. So I stay focused on that side of things, and not so focused on whether a particular narrative is now imploding.

Nicholas: I can't think of a better way to end this conversation. If people want to get involved, what are the most virtuous ways that they can do so?

Cassandra Heart: I like that question. So yeah, if you want to get involved, we have an open Telegram community. There are actual community-driven communities; I say that in particular because they engage in discourse that I legally need to stay away from. But as far as the official Quill node runners Telegram that is available: we did have it wide open originally, but there were people who abused that, so just reach out to me on Farcaster and I'm happy to add you in. There are lots of other people there who are happy to add you in too; it's just how we're mitigating some of the problem noise. We've also obviously got the repository on GitHub. We've got lots of documentation, and lots of documentation that needs to be written. Yeah, there's lots of work to be done, so there are plenty of opportunities for people who just want to raise their hands and say, "I want to help." Lots of low-hanging fruit. We welcome contributions for practically everything.

Nicholas: Amazing. Awesome. Cassie, thank you so much for coming on the show and telling us all about Quilibrium. It's a fascinating project, and even though it's been seven years, it still feels like very early days and an exciting time to get involved.

Cassandra Heart: Thanks for having me.

Nicholas: Yeah, absolutely. All right. Thank you. Thanks. Hey, thanks for listening to this episode of Web3 Galaxy Brain. To keep up with everything Web3, follow me on Twitter @nicholas with four leading Ns. You can find links to the topics discussed on today's episode in the show notes. Podcast feed links are available at web3galaxybrain.com. Web3 Galaxy Brain airs live most Friday afternoons at 5:00 PM Eastern time, 2200 UTC, on Twitter Spaces. I look forward to seeing you there.

Cassandra Heart: Bye. Bye.
