In The Clouds, A d/acc podcast

d/acc One Year Later: A Deep Dive into Vitalik's Vision

Hunter H & Sam G

Join us for our inaugural episode as we explore Vitalik Buterin's groundbreaking paper on defensive accelerationism (d/acc) and its implications for humanity's future. Hunter and Sam break down the three pillars of d/acc - democratic, decentralized, and differentially defensive acceleration - and discuss how these principles can help us navigate the rapid advancement of AI and superintelligence while preserving human agency.

Key topics covered:

  • The fundamental principles of d/acc and how it differs from e/acc

  • The challenges of maintaining human agency in an AI-driven world

  • Brain-computer interfaces and digital sovereignty

  • The role of crypto communities in advancing d/acc

  • Practical approaches to defensive technology development


Whether you're new to d/acc or deeply invested in these ideas, this episode provides crucial context for understanding one of the most important techno-philosophical frameworks of our time.

Hey, I'm Hunter, and this is Sam. Hey guys. And yeah, this is In The Clouds, a d/acc podcast. Episode number one, and actually episode number one of the first d/acc podcast; I think we can say that. So yeah, I think Sam and I were both pretty inspired by Vitalik's original d/acc paper, and then on January 5th he released "d/acc: one year later," which is, I guess, a revisiting of the concepts and an exploration of how his viewpoints on the subject have changed. Yeah, so basically in this week's episode we're going to do a deep dive into this paper and the ideas behind d/acc. If you're unfamiliar, or if you haven't read the paper, this should be a way to get really familiar with d/acc and its values. Not just the values the movement has, but also the values that we have. Totally. We have a startup we're working on called Tiny Cloud, and it's so aligned with d/acc; I think we just feel really aligned values-wise. So in some way, making this podcast is an opportunity to explore this whole space more deeply and to see how the piece that we're building fits into it, which we think is actually really important. So yeah, maybe we should just start by jumping into what d/acc is. Sure. More broadly. Yeah. So Vitalik talks about it in his paper, and I'll quote him: the goal of d/acc is to build a world where we preserve human agency, achieving both the negative freedom of avoiding active interference, whether from other people acting as private citizens, from governments, or from superintelligent bots, with our ability to shape our own destiny, and the positive freedom of ensuring that we have the knowledge and resources to shape our own destinies. And I might just add that he's pretty focused on AI: superintelligence, and how we navigate with regard to it, has been the main focal point of his last two papers. And he sees this divvied up into three pillars. Well, I guess it's also worth understanding why it's called d/acc. We can talk about the pillars in a moment, but there's this thing called e/acc, which is effective accelerationism, and it's worth understanding what that is to see where d/acc fits in. First there's EA, effective altruism, which is a movement from the early 2010s. It's all about being altruistic in an effective way, doing things that have asymmetric upside relative to the cost, like giving people malaria nets to help them avoid malaria: it's much cheaper and the outcome is really good. Yeah. e/acc came up later and kind of adopted that naming as effective accelerationism. It's basically a hyper-focused version of accelerationism, with a really strong belief that superintelligence and AI are going to solve all the problems in society. It's going to solve poverty, it's going to solve illness, it's going to solve everything. So the belief of e/acc is: we just need to accelerate, and anybody who doesn't want to accelerate, who wants to slow things down, is basically a doomer with a negative perspective. And d/acc comes in and basically says, hey, maybe as we accelerate these technologies, we should do it in a way that's focused on... kind of making sure it's defensive.
So I think, yeah, let's chat about the pillars of what makes d/acc d/acc. Yeah, so a brief definition of d/acc: it's decentralized and democratic, differential, defensive acceleration. The core pillars are that it's the type of acceleration that's protected from consolidation into a centralized authority, with the direction decided together by the people, and it's differentially focused on the defensive side rather than the offensive side. And, just momentarily harkening back to Vitalik's original paper, My Techno-Optimism, I think this is summarized pretty well in one of its statements: the world over-indexes on some directions of tech development while it under-indexes on others, so we need active human intention to choose the directions that we want, as the formula of maximizing profit will not arrive at them automatically. Yeah, and I ultimately agree. I think the formula of maximizing profit can shove aside some things that are maybe really necessary. Profit doesn't account for stuff like love and happiness, at least not directly; it's like, oh, maybe if that's profitable, we'll do it. But there are values that we have as beings that we have to actually honor and hold. And I think a real value of d/acc is human agency. It comes up over and over again in the paper: that we as people, despite the fact that superintelligence is rising, should still have agency over our lives. We should still be able to have a vision for our future that we can act on. How do we preserve that, and not end up, I don't know, maybe enslaved by technology or by superintelligence? Yeah, and we need to be thinking about what that agency looks like now, ahead of the emergence of, whether you call it superintelligence, AI, or AGI; because by the time it arises, we'll be living inside some sort of foregone conclusion. Vitalik actually has a pretty clear description of this: we don't want to end up in the trap of irreversible human disempowerment. Which is pretty bleak phrasing, and I think it really underscores the importance of thinking critically about what human agency looks like in a world with AI, especially a world with the type of AI that we fundamentally don't understand right now. Yeah. And one thing I really liked about this paper is that it talks about, for each of the pillars of d/acc, what happens if that pillar is missing. Because d/acc is made up of these three ideas: decentralized, democratic, and differentially defensive. So I'm going to walk through what the paper says. Basically, if you're missing the defensive aspect, you end up with decentralized accelerationism, and that doesn't actually safeguard against catastrophic risk or tyranny. It's just e/acc, but using, I don't know, decentralized technologies like blockchains; which is what it is if you're not defensive. Yeah, and specifically he talks about the risk of one person swooping in and saying, hey, I'm here as a protector, right?
Like, I want to build this, I want to make sure it's good for everyone, and I'm going to permanently establish myself on top. Yeah, which is one outcome that could well happen if it's not differentially defensive, right? And then he talks about what happens if you're missing the decentralized or democratic aspect: you could get a powerful authoritarian approach. And he gives the example of freedom tags; it's actually an example from Nick Bostrom, so I'll quote it. You can imagine a world where everyone is fitted with a freedom tag, a sequel to the more limited wearable surveillance devices familiar today, such as the ankle tag used in several countries as a prison alternative. Encrypted video and audio is continuously uploaded and machine-interpreted in real time. So in this, I guess, nightmare hypothetical scenario, everyone is monitored, and the monitoring is done with advanced AI tooling. Maybe it looks kind of like China does today, where there's an advanced centralized system observing and tracking everyone's behavior in the name of safety, in the name of preventing harm. And so if it's not decentralized and democratic, this is the outcome you can get. Maybe there are harms that are actually prevented, but you also lose the agency that we have today, and it's probably not a good future. It's not a future I want. Yeah, I mean, I think we'd both agree that's pretty dystopian. Even the name freedom tag makes me recoil. Yeah, exactly. Orwellian naming. Yeah. And it's like the panopticon, right? Exactly. What happens when you're always watched, or you don't know whether or not you're being watched? Which is actually really effective, right; it's effective at making things safer, and it comes with some significant hindrances. Yeah. One other thing Vitalik talked about on this note: a version of centralized control that's usually overlooked, but still harmful, is resistance to public scrutiny. If something is centrally controlled, the public can't really stand up and say, hey, is this actually good? And an example of this, which Vitalik talks about, is COVID, and the centralized reaction to COVID. He talks about gain-of-function research being funded by multiple world governments, which is pretty well documented; he has a link to a BBC article about this. And centralized epistemology leading to the World Health Organization not acknowledging for years, actually, that COVID was airborne; he links both to World Health Organization tweets and to Nature articles. And, I mean, we obviously know now that COVID is airborne; these things are more widely known now. But at the time COVID was spreading widely, there was a very centralized response, and you really couldn't critique it, socially, and even platform-wise: you could be removed from platforms. So this is the risk of a centralized approach. And when we think about risks like that happening with AI: do we want mistakes being made by a centralized party? Do we trust one agency or set of individuals? Is that even the right decision
for humanity as a whole, right? This is philosophical. I mean, are there smarter people than me who could probably be making some of these decisions? Sure. But do I still want to have my say? I think so. Yeah, and even: is it possible for any set of people to make decisions for humanity as a whole? Obviously any set of people has a set of values and lives in some kind of echo chamber. Any time there's a set of people, there are certain values that propagate, so having diversity in the sets of people making decisions can actually lead to different decisions being made, and to landing on a really good outcome. Totally. And so, not just I think, but obviously Vitalik thinks, the decentralized approach is better at addressing risks than the centralized one. And the last piece of d/acc is what happens if you're missing the acceleration aspect: you're focused on decentralized, democratic, and defensive technologies, but you're not actually accelerating, not using new technologies. Then you drift into degrowth, which has its costs. A lot of growth in society has led to really good benefits, and we'll pull up this graph of life expectancy that Vitalik links to here. Ultimately, life expectancy has grown over the 20th century, and you can see in this graph that it grows along with technological adoption. You can see when certain countries adopted vaccines, when their food supply changed; all these things enabled by technology lead to further life extension, right? Instead of dying when you're 50, people are dying when they're 80, and that's thirty years of life and grandkids and things that we think are good. So the issue with deceleration, with slowing down, is that there are huge costs. Significant costs, even economically, right? If there's a slowing economy, slowing growth, people lose their jobs; there's actual suffering that happens on a very individual level. And so acceleration is actually important, but it's really important that we do it in a way that doesn't take away human agency, because agency is a part of being human that we maybe take for granted, and technology can really shift it. Totally. Yeah, so moving on: Vitalik kind of summarizes the goals, or maybe the principles, of d/acc as a movement, so I'll quote him. With d/acc, we want to be principled at a time when much of the world is becoming tribal, and not just build whatever; rather, we want to build specific things that make the world safer and better. With d/acc, we also want to acknowledge that exponential technological progress means the world is going to get very, very weird, and that humanity's total footprint on the universe will only increase. Our ability to keep vulnerable animals, plants, and people out of harm's way must improve, but the only way out is forward. So we can't go backwards; can't go back to some golden time in the past where everything was perfect. And in d/acc we also want to build technology that keeps us safe without assuming that the good guys, or good AIs, are in charge.
We do this by building tools that are naturally more effective when used to build and to protect than when used to destroy. Yeah; any thoughts on that? Yeah, I mean, I think in particular, building technology that keeps us safe without assuming that good AIs are in charge is a really interesting concept. I know we were talking the other day about, for example, the metaphor presented at the start of the book Superintelligence, by Nick Bostrom, right? Yeah, I mean, we don't have a conceptual understanding of what the alignment of a superintelligence might look like, and it really depends on how it emerges. We don't really have the toolkit, at this point, to raise the superintelligence. Raise the superintelligence. Yeah, sure. And I think it's worth talking about this parable at the beginning of Superintelligence. It's called the unfinished fable of the sparrows, and it was really profound to me. I read it when I was at OpenAI, and it really shaped the way I saw the nature of the work that was happening there. It's basically a story that starts with these sparrows, and after a long day, they're just chilling. One day, one sparrow gets the idea of how easy life would be if there were an owl that could help them do stuff, help build their nests. And the other sparrows are like, yeah, it could help us feed our young and take care of our old and watch out for the neighborhood cat. So the elder sparrow says, all right, this is a great idea, we should all go out and find an owl egg. Or a crow egg, or whatever. A small weasel. Exactly; anything that looks bigger and smarter than us, that can come help. So all the sparrows start heading out. And then one sparrow says, wait, shouldn't we figure out how to tame and domesticate an owl before we bring an egg back? And the others say, that's really hard; I don't know, that sounds like a lot of work; we'll solve that problem later. It's already a lot just to go find an egg. And that sparrow says, this doesn't make any sense, we should figure it out first. But it was too late; all the sparrows had gone out. And so this sparrow and a couple of others started to really think about what it would take to tame an owl, and they realized it was actually really hard, and ultimately they just kept working as hard as they could before the others came back with an owl egg. Will it be enough? Yeah; it's an unfinished fable. And then Superintelligence begins as a book, ultimately highlighting the risks and dangers of superintelligence. And I would say Sam Altman has now essentially announced that we found an owl egg. He's like, hey, we found a path to superintelligence; we know it's possible. And so the time is really now to think about, okay, what technologies do we need in order to protect the values that we have? For sure. And one value, at least to us as beings, is agency. Thanks for entertaining my little tangent there. No, no, definitely. As we're thinking about this, the big thing I keep coming back to is that concept from just before: Vitalik saying the world is going to get very weird.
Like, I don't think people really understand how weird. Yeah. And there are better ways to handle this and worse ways to handle this, and I think that's the whole focus of d/acc: we're accelerating along a razor's edge. Yeah. There are good outcomes and bad outcomes, but we should be very discerning, defensive, you know, and try to take each step, even though we're taking them very quickly, as carefully as possible. Yeah. And just remembering, because this is something I once felt: we can't go back. It's like, well, how do we stop this? We're on a train; it's forward motion; no U-turns. And so, of the principles he outlined for d/acc, one of the things that sticks out to me is the idea that we can't assume the good guys, or good AIs, are in charge. Because, I don't know, it's an old saying: power corrupts and... Absolute power corrupts absolutely. Yeah, exactly. And I don't want to speculate on stuff at OpenAI, but you could look at the upheaval in the leadership at OpenAI as they get closer and closer to superintelligence, versus the altruistic nature of its origins. It started with such good, well-intentioned people; you would have bet on them as the good guys. And maybe it's like, cool, no, the good guys are Anthropic. But maybe they go further along and actually they're not the good guys either. Or it's not even that there are good guys or bad guys; ultimately we're dealing with people with varying incentives. And maybe we shouldn't keep playing this game we've played throughout history, of giving a few people all the power; instead it's worth having mechanisms that allow for coordination, right? We have something like the UN; it's not just a single power. And another example of the good guys being in charge is the United States. As someone who lives in the United States, and I do love America and its values: America has generally been the good guy in the world, right? From the American perspective. But there are many countries that don't see America as a good guy. If you look at the coups the CIA has done, which are well documented, overthrowing governments and destabilizing places: is that an action the good guy would take? Maybe it was in the interest of the good guy, but is it in the interest of everyone? So you can see something similar if we follow the same playbook of putting the good guys in charge; that just isn't actually a good outcome globally. And you should imagine what it looks like if you're not on the side of the good guys. What if the good guys take, say, the Chinese approach, right? Where it's like, actually, you all need to have less, or something. Crazy. Yeah, exactly. It's like, wait, hold on. The whole idea is that we want to put a cohort of people in charge. Right. And this, to me, begs the question: what does an adequate group of people participating look like? And I think this is one of the really fascinating things I'm encountering as I delve into other texts as well. Reading this paper inspired me to go back and start reading the essay by Dario Amodei called Machines of Loving Grace, and it made me realize how many parallels there are
between the ideas Vitalik is presenting, which are maybe a little more crypto-aligned, and the ideas Dario Amodei is presenting. And really, one of the interesting things they both call for is a broad cohort, or coalition, of people across disciplines to be interacting. And that's fascinating in relationship to this question I keep thinking about: what does a sufficient group of people leading us forward look like, and how are they represented? What's fascinating about this paper is that it's taken me down the rabbit hole of recognizing that this is not the very first time a lot of these concepts have been presented. Maybe in this culmination, this gathering of these concepts, in this framing of them as d/acc, there is more of a culmination of understandings. But it's striking to see many of these superintelligence thinkers also calling for a broad coalition of cross-disciplinary, intelligent people. And in Machines of Loving Grace, you start to read about how Amodei conceptualizes the idea of AGI. First he describes the traits of what the superintelligence looks like, how intelligent it is, and he goes through this whole outline where, thinking about this superintelligence, you realize it is so capable in so many different domains; with what he's describing as AGI, you really start to get the picture of it. And then he goes on to say it will be able to clone millions of instances of itself, and then you start to see the picture of where this is headed. So, all this to say: I think it's very fascinating that these people, who you could say on some level are contemporaries, or at least similar intellects in the space, are really examining these concepts from a similar standpoint. Maybe in another podcast we can go further into that. I think it's very connected. Machines of Loving Grace, I haven't read it; I'll give it a read. It'd be a good one. Sounds cool. But I think we were just at human agency, really; we were right at the human agency part. Yeah. And so Vitalik then goes on, basically illustrating two directions. Focusing on the challenge of superintelligence, he proposes a path where we can have superintelligence without disempowerment. One of them is building AI as a tool, instead of AI as these super-autonomous agents. So instead of a world where you're interacting with a bunch of AI actors, you're basically using the AI as this kind of mecha suit: you're amplifying your own capability. To quote Vitalik: over time, we want to proceed toward an eventual endgame where the superintelligence is a tightly coupled combination of machines and us. So that's one vision. And the other vision he talks about is information defense: building defensive social technology that helps communities maintain cohesion and have high-quality discourse in the face of attackers, so that people can also make high-quality judgments.
And so he talks about some things that are the beginnings of this, like prediction markets. And the main point he makes here is that we shouldn't just focus on the survival aspects. I'll quote Vitalik again: there's a consistent pattern across all domains that the science, ideas, and tools that can help us survive in one domain are closely related to the science, ideas, and tools that help us thrive. And he gives some examples, including anti-COVID research, right? It helps us survive COVID, but it's actually also really helpful in understanding viral persistence and how it relates to things like Alzheimer's, which is a wicked disease that affects a huge share of people over the age of 80. And as we look at longevity, if most people are going to live past 80, you'd kind of like to be able to remember what's happened. And there are social tools like Community Notes, and cryptographic tools like zero-knowledge proofs and fully homomorphic encryption that increase privacy. So he talks about these things that not just help us survive, but help us thrive. And there's a subject area he ends up really digging into that we've talked about quite a bit ourselves, which is brain-computer interfaces. Yeah, and I'll just harken back for a second to the first part of the survive-and-thrive argument here, where he lays out the steps. It says: today, build AI as tools rather than as highly autonomous agents. Tomorrow, use tools like virtual reality, myoelectrics, and brain-computer interfaces to create tighter and tighter feedback between AI and humans. Then, over time, proceed toward an eventual endgame where superintelligence is a tightly coupled combination of machines and us. What I think is really interesting here is the brain-computer interfaces, and this concept that was new to me of myoelectrics: interfaces that work off the electrical signals generated by our own musculature. This is a really fascinating concept; I hadn't thought about it before. It is. It really is an empowerment of us; it almost has this element of physical sovereignty. Right. It doesn't need a keyboard; it's driven by me. Right, it's part of you, essentially; the food that I eat ends up driving the technology. Which is a really interesting concept. And I think there's really just the nature of becoming more and more of a cyborg. Yeah. Because I'd make the argument that we're already cyborgs, in that we have our devices that live outside of us, like our phones, but without a phone we're actually quite crippled in modern society. I live in New York City, and navigating without a phone is actually much more challenging. Right. I mean, when you look at the distribution of phones at this point, almost no one doesn't have one. You might not know where your next meal is coming from, you might not have a house, but you've got a cell phone. Yeah. I do think that's a pretty compelling argument for how interwoven they are into society, and even the human psyche, at this point. You could argue that your cell phone is sort of an extension.
It's an extension of yourself in some way. Not that we necessarily like this, right? It sometimes has limiting effects on our agency, but also these enablements that are really powerful. Yeah. And so when we look at something like myoelectrics or brain-computer interfaces, there's an invasiveness to our physical form that I think can be scary. And part of the concern is, wait, do I have to sign up for some SaaS software in order to run? It's like, we should actually have a way to have agency and sovereignty there. Yeah, without having to... right, like, if I don't pay my BCI subscription, do I no longer have access? Do you get ads now? Does it run ads in your consciousness? Right, this is the Snow Crash dystopia, where the guy is getting ads run via his brain-computer interface. And I think this is really interesting, because there's a section that starts: BCI is very relevant as an info-defense and collaboration technology. Could you quote the whole thing? Sure. BCI is very relevant as an info-defense and collaboration technology, because it could enable much more detailed communication of our thoughts and intentions. BCI is not just bot-to-consciousness; it can also be consciousness-to-bot-to-consciousness. And this concept is really what you just said: do I have to subscribe to some SaaS? Ideally it's consciousness-to-bot-to-consciousness in a sovereign way, versus consciousness-to-enterprise-SaaS-model-to-bot-to-enterprise-SaaS-model-to-consciousness. Yeah. It's like, where do we draw the line at intermediaries? Because we haven't drawn it so far. I mean, when I think about where my data exists on my phone, it's distributed between what has to be thousands of applications that have access to some set of my personal data; probably more than I'm aware of, frankly. So, do we need intermediaries? In this day and age, maybe we don't. Yeah, and there's a point there. It reminds me of when I was really young; I used to be really interested in what I called personal data networks. I was maybe 18, 19, and I had a Pebble smartwatch, and I thought it was so cool that I could get notifications from my phone via Bluetooth. I had another Bluetooth device, and I was imagining this network where all my Bluetooth devices talked to each other as I moved through the world. And at the time I thought, well, you kind of do need some kind of server elsewhere; I was pretty young and early in my technical career, so I wasn't really thinking about ways of doing this in a sovereign way. But I think about that now. And if you look at stuff like myoelectrics and brain-computer interfaces, you want to be able to use such technology offline. You want to be able to use it privately. If I want to send you a thought, I don't want some intermediary coming back with: you're thinking this bad thought. Maybe you can imagine a thought that would be illegal in China but isn't illegal here. You're thinking this bad thing. Right. Exactly.
And so it really matters: the ability to be sovereign, the ability to be independent. And, I guess, just not to be crippled. For example, even criminals can use cell phones, right? We don't take their cell phones; we don't build phones in a way where it's like, oh, this person's breaking the law, let's disable it. And you can consider that there's a trade-off there, right? You want a safer society, and you want freedoms for people, and you don't want people to be able to impinge on other people's freedoms. But there is this question of: what's yours? What constitutes part of you? And can we cross this bridge to having superintelligence in our world without giving up this fundamental piece of being that is agency? And on some level, what he's pointing to here is a post-verbal society, which is interesting. And one of the other things Dario Amodei has to say about this kind of framework is that often these concepts get diluted by looking at them from a sci-fi perspective, because then they come with all the sci-fi baggage. When actually, there are in fact people using brain-computer interfaces right now. Not many people, but there's the guy with a Neuralink implant who's paralyzed and playing video games. So this isn't some far-fetched "we'll consider that thing when the time comes." Now is the time to consider it, right? If we need sovereignty in this space, or a way to have our thoughts stored privately, the time to consider it isn't after the technology is implemented. Much like the owl parable from before: there are a number of lines in the sand that we need to be drawing right now. Yeah. I think this type of sovereignty is really important. Yeah. And right underneath the BCI statement, I'll quote Vitalik here: a lot of biotech depends on info sharing. In many contexts, people will only be comfortable sharing information if they're confident it'll be used for one application and one application only. This depends on privacy technologies, like zero-knowledge proofs, fully homomorphic encryption, obfuscation, et cetera. And so, just that ability to have privacy; and it's more than privacy, it's really the ability to trust an application. Say I want to check my DNA for some disease, right? Maybe I want a medicine made specially for me because I'm suffering from something, and it's tailored to my DNA. Is my DNA then going to be bought and sold, and things targeted toward me genetically in the future? That's possible. Absolutely; it's really not that much of a stretch. No, and the advances in biotech, just like with superintelligence: this is not a stretch. These are all physically possible things; it's really just a matter of execution, implementation, and the research to get there. And so when we think about using such applications, there needs to be trust.
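To make the homomorphic encryption idea a bit more concrete, here's a toy demo of additively homomorphic encryption (a much simpler cousin of fully homomorphic encryption, in a Paillier-like style): a lab can add up encrypted values it never sees. The scenario and numbers are hypothetical, the keys are demo-sized, and real systems would use vetted libraries with much larger parameters.

```python
# Toy Paillier-style encryption: tiny demo primes, no padding, no
# security; purely to show "compute on data without reading it."
import math
import secrets

p, q = 1000003, 1000033        # real keys use primes of 1024+ bits
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid because the generator is n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:                 # r must be coprime to n
        r = secrets.randbelow(n - 1) + 1
    # c = (1 + n)^m * r^n mod n^2, using (1 + n)^m = 1 + m*n (mod n^2)
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    return (pow(c, lam, n2) - 1) // n * mu % n

# A lab sums two encrypted risk scores it cannot read:
c1, c2 = encrypt(17), encrypt(25)
c_sum = c1 * c2 % n2           # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == 42    # only the key holder can decrypt
```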
And ultimately, if we look at how corporations use data, the business models they have today: it's a bit reductionist, but it's manipulation for profit. Right. Stay staring at this screen a little bit longer so we can show you one more ad; how do we get you to keep staying here? And I think that model is kind of fundamentally adversarial; there's a kind of distrust. Ultimately, I have a distrust of an adversary, someone who's playing a different game. I want to send you a message on Instagram, and instead Instagram's goal is: how do we get Sam to stay here as long as possible? And it succeeds a lot of the time. I wanted to send you a message, which takes 30 seconds, and then I'm on Instagram for five minutes. Right, and then you're on some partner website, looking at, like, a knife. Where am I? How did I get here? I was trying to send a message and now I'm shopping. Yeah. And I think there are better ways to do so much of how we even trade, buying and selling stuff. Take the knife example: I want to buy a knife, maybe a really good one. I want to share that with whoever is providing knives: give me what you've got. Then I want to look at the best options and pick one. There are ways to do this with something like private set intersection, where I can say, hey, this is what I'm looking for, and you can see whether that matches what you have to offer, along the lines of ZK proofs. To share the traits I want, and: this is the party I'm willing to share it with. Yeah. If you can fulfill these criteria, I'll share this information with you. Exactly. But if not, it's not relevant information; we don't need to exchange it. In fact, I really value my privacy.
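Here's a minimal sketch of what that private set intersection exchange could look like, with hypothetical knife-shopping data. A real protocol (OPRF- or elliptic-curve-based) would use proper hash-to-group encodings and vetted parameters, so treat this purely as an illustration of the shape of the exchange.

```python
# Toy Diffie-Hellman-style private set intersection (PSI).
# Illustration only: the modulus, encoding, and data are hypothetical,
# and this construction is not secure as written.
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; demo-sized, not production-grade

def hash_to_group(item: str) -> int:
    """Map an item to a group element (simplified encoding)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items: list[str], secret: int) -> set[int]:
    """A party raises each hashed item to its private exponent."""
    return {pow(hash_to_group(x), secret, P) for x in items}

buyer_wants = ["8-inch chef knife", "carbon steel", "full tang"]
seller_has = ["8-inch chef knife", "bread knife", "carbon steel"]

a = secrets.randbelow(P - 2) + 1  # buyer's secret exponent
b = secrets.randbelow(P - 2) + 1  # seller's secret exponent

# Each side sends over its blinded set; neither reveals raw items.
buyer_blinded = blind(buyer_wants, a)
seller_blinded = blind(seller_has, b)

# Each side re-blinds what it received. Exponentiation commutes,
# so (h^a)^b == (h^b)^a and matching items collide.
double_buyer = {pow(v, b, P) for v in buyer_blinded}    # computed by seller
double_seller = {pow(v, a, P) for v in seller_blinded}  # computed by buyer

# The seller returns double_buyer to the buyer, who learns the overlap.
print(len(double_buyer & double_seller), "criteria matched")  # -> 2
```

In this shape of protocol, only the overlap is learned; the unmatched items on each side stay blinded.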
And so, yeah, there are just totally different ways of having technology and building technology. And the reason this matters now is that superintelligence is going to grow along the scaffolding of the society that we have. So if our societal scaffolding is that we centralize everything, that's the path of least resistance: okay, I'm the superintelligent entity, I have access to all the data; if I don't have access to the data, that's a problem, and I should optimize for that while accomplishing my goals. Which can lead to a path of less agency for us. Totally. And it's really not just about AI enablement, right? AI enablement, great; I think we're both interested in being enabled by AI, and a lot of people are. But fundamentally what this is pointing to is that having privacy, having sovereignty, also accomplishes something else: it preserves your agency. I mean, I'm very aware at this point that there are people whose job is to grab my attention within the first ten minutes of the day. Before I've even really rubbed the sleep out of my eyes, I'm getting advertisements, and they're highly targeted and contextualized, using algorithms and dopamine feedback loops to try to take me further down the rabbit hole of just what you described on Instagram. And this is, I think, poised to get a lot worse. I've become aware, as I think many people have to some degree, of how destructive it can be to have my attention hijacked this way. And think about what that might look like when, instead of being in the hands of a product manager, it's an AGI or a superintelligent AI controlling how information is disseminated to me based on my interests. This poses huge problems. So on some level, it seems like building a sufficiently sovereign data store is dealing with both fronts at once. It is. Yeah. I also think, maybe one last point, and then we should continue with the paper; there are many, many interesting ideas here. We've talked about BCI, and that feels far out; I think it's closer than we can really see, but it does feel far out. One thing that's here today, though, is AI pendants, which are really similar in shape to something like a BCI in terms of the closeness of the data. If I have everything I've said over the last year and everything I've heard, that's really sensitive, and I really care about what happens with it. Your conversations. Everything I'm saying: you get my moods, my emotions, the people who are important to me, my to-dos. You can really analyze me, and I actually find that really useful. I want this. But I've worn one; we wore them for like two months last summer in New York, right? And there's this trust thing that comes up: can someone trust me if I'm wearing one of these? Maybe they trust me; I'm like, no, no, no, trust me, I can keep this secret, I can delete it, whatever. But if it's a networked device, if it's not sovereign, maybe you trust me, but you don't trust how far that network extends. That is a problem. Yeah. And think of Edward Snowden talking about the surveillance systems in the US in the early 2010s, and things have evolved since then, where they basically captured all this user data from all these centralized platforms. What does that look like when you're wearing such a device? Everything just gets scooped in; you're basically being surveilled by unknown parties at length. How do you actually have sovereignty? And if we can't solve this for something like an AI pendant, how are we going to solve it for BCI? Yeah, exactly. So that's where we are today. It's not that these things matter in some distant future; they're present today. And ultimately, if you want some kind of sovereignty and agency, or not to be monitored, or to be able to be trusted, you kind of have to make this trade-off: actually, I have to get away from technology.
And then there's a loss. This is the anti-technology, or decelerationist, view a little bit: it's too much. But you're not going backwards; you're actually just hitting the brakes. Yeah, and you're hitting the brakes for yourself. And ultimately, we live in a world where there is competition. Whatever endeavors we're pursuing, there are people with similar goals, and if they're willing to use tools like AI, and to capture things about themselves, and you're not, maybe you get out-competed. And so it becomes this Faustian bargain, this deal with the devil: you want some advanced capability? Then you have to give up some agency. Right; you've got to be a node of Skynet. Yeah, exactly; you're absorbed into this giant organism. And actually, maybe it doesn't have to be that way. Right, and I think this is why d/acc is so compelling. With the technologies in cryptography, zero-knowledge proofs, fully homomorphic encryption, all this stuff, we can actually have privacy, we can actually have sovereignty, and we can still participate, with complex technology, in a complex society, without having to make these wicked trade-offs. And I think we're kind of past that point for a lot of people; it's sort of a foregone conclusion that to have all these capabilities I have right now, I have to share my data in such and such a way. It's been like the lobster in the pot, a slow boil: you want to use GPS, you want to use whatever, so, okay, we just terms-of-service everything away. Because for a long time, there hasn't really been a compelling alternative. Yeah. And so, ultimately, that's why we're here. We're here to share these ideas, but also to actively build, to actually make the changes we're talking about, along with a massive cohort of people in the crypto community. Totally. Really the d/acc community, I'd say. A lot of crypto is the casino stuff, which is not where we're at. But this is what was really surprising to me: going to Devcon, getting to see d/acc Day, for example, and actually seeing the diverse cohort of people that showed up for d/acc Day at Devcon, which is really a crypto-native event in so many regards. And to see how that space has broadened into people who are sharing things like regenerative technologies, open-source vaccine technologies. Really, the different domains coming together to exchange ideas keep growing. And I would say this is a very unique plot twist for the crypto space. Yeah. And let's hop back into the paper, because Vitalik talks about this a bit, in terms of the impact of this space being able to hold space for others with many ideas. So we went on a BCI tangent, but that's okay. There are two overall topics left to discuss. One is regulations, liability, and the soft pause on hardware, which is a very interesting idea; and the other is crypto's role in d/acc. Maybe we can jump around and talk about crypto's role in d/acc first, since it came up naturally.
And then we can talk about the section on regulation and so on, which, again, is very interesting. But I'm just going to jump ahead and quote Vitalik a bit on the role of crypto in d/acc. Much of d/acc goes far beyond typical blockchain topics: biosecurity, BCI, and collaborative discourse tools seem far away from what a crypto person normally talks about. However, there are important ties between crypto and d/acc. In particular, d/acc is an extension of the underlying values of crypto, decentralization, censorship resistance, an open global economy and society, to other areas of technology. Because crypto users are naturally early adopters, there's an alignment of values; crypto communities are natural early users of d/acc technology. The heavy emphasis on community, and the fact that these communities actually do high-stakes things instead of just talking to each other, makes crypto communities particularly appealing incubators and test beds for d/acc technologies that fundamentally work on groups rather than individuals. Crypto people just do things together. And many crypto technologies can be used in d/acc; there are win-win opportunities to collaborate on crypto-adjacent technologies that are very useful to crypto projects but also key to achieving d/acc goals, like formal verification, computer software and hardware security, and other things like that. So, yeah; any thoughts there? Just that the third point, about crypto communities just doing things together, has been a new awareness for me, just this last year, between two different pop-up communities that we had the pleasure to attend and see. I think it's really true that crypto communities do just do things together. With this emergent space of these cohorts, people travel together to events like Aleph in Argentina earlier this year, then Edge City Lanna in Chiang Mai, or the different Edge events. There's this interesting thing I've seen emerging, which is that I've been introduced to a couple of new fields: DePIN and DeSci. DePIN is decentralized physical infrastructure networks, and DeSci is decentralized science. I was very fascinated to compare what I expected versus reality. What I expected was people predominantly focused on developing new crypto technologies, which is true to some degree, but I was surprised how broadly that net has been cast, with the underlying shared interest being cryptography: not just cryptocurrency, not just Bitcoin and Ethereum, but really broad. There's a whole cohort of people doing things together, and I think that's really interesting in terms of where the d/acc space is. I actually do think there's this interdisciplinary thing happening, where people from multiple domains and very different backgrounds are coming together from different parts of the world and exchanging ideas. It just really underscores the idea that having a diverse cohort of people working on these concepts together is really important for d/acc. Yeah. And it's also really interesting to see the projects that come out of this. Totally; there's cross-pollination that happens in collaboration. Yeah.
I think later in the paper he talks about funding. Yeah, he talks about deep funding: d/acc and public goods funding. Basically, Vitalik argues that strong decentralized public goods funding is essential to the d/acc vision, because a key d/acc goal, minimizing centralized points of control, inherently frustrates many traditional business models. It's possible to build a successful business on open source, but in some situations it's hard enough that important projects need extra ongoing support. So it's important to figure out how to do public goods funding in a way that addresses some of these problems. And so Vitalik proposes deep funding, which has two primary innovations: a dependency graph, and AI as distilled human judgment. Deep funding uses a dependency graph and asks local questions. Instead of asking, should we fund this project with this giant goal, and how much do we give it, it asks: is project A or project B more valuable to outcome C? And a juror makes a decision, A or B. So there are multiple jurors making pairwise decisions, A or B, B or C; almost like going to an eye doctor, right? You pick one or the other. Then you have all these judgments that humans have made, and AI is used as a way to distill that human judgment. It's really close to reinforcement learning from human feedback, the technique used to fine-tune base foundation models, where humans judge which of two outputs is better, A or B, and the model learns: okay, I should produce stuff like this. Deep funding does the same thing: it takes all of these judgments people have made, should we fund project A or project B, and has multiple AI models take the feedback the humans have given and generate a full funding graph that maps best to those distilled judgments. Vitalik summarizes it as an open market of AIs as the engine and humans as the steering wheel. And I think deep funding is an interesting idea, because with open source projects you can actually see that this depends on A, which depends on B, which depends on C; you can build the graph out like that. It seems like a really sound way to determine which public goods get funded; it's taking a lot of weights into account. Yeah, and I like the analogy to an appointment with an optician. It fundamentally makes sense to me: which of these things is more relevant to the development of this other thing? It's very narrowly scoped, so people can actually think about it and make really good decisions. Instead of saying, hey, your job is to judge 500 projects; questions that are really broad, with no context. How do I even contextualize that? Here's a million dollars for these 500 projects, make sure it's allocated well: that's a hard task for a person, or a set of judges. Instead it's: your job is to look at these 20 pairs and make these 20 decisions, each one really considering A or B. That's a smaller scope, and then you have multiple judges doing that, with maybe overlapping data, multiple people judging one set, stuff like that.
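As a rough sketch of that loop (hypothetical project names and a simple Bradley-Terry-style scoring rule; not Vitalik's actual implementation): jurors answer narrow A-versus-B questions, competing models each propose a full weighting over the dependency graph, and the market keeps whichever model best matches the human spot checks.

```python
# Sketch of deep funding's "AIs as engine, humans as steering wheel."
# All names and numbers are hypothetical.
import math

# Jurors answer local questions, "is A or B more valuable to outcome C?",
# recorded here as (winner, loser) pairs.
human_judgments = [
    ("lib-crypto", "lib-ui"),
    ("lib-crypto", "lib-logging"),
    ("lib-ui", "lib-logging"),
]

# Competing models each submit a complete funding weighting (sums to 1).
candidate_weightings = {
    "model-alpha": {"lib-crypto": 0.6, "lib-ui": 0.3, "lib-logging": 0.1},
    "model-beta": {"lib-crypto": 0.2, "lib-ui": 0.3, "lib-logging": 0.5},
}

def agreement(weights: dict, judgments: list) -> float:
    """Log-likelihood of the human spot checks under a weighting,
    reading weight ratios as pairwise win probabilities."""
    total = 0.0
    for winner, loser in judgments:
        total += math.log(weights[winner] / (weights[winner] + weights[loser]))
    return total

# Keep the weighting that best fits the jurors; it then drives how
# funding flows down the dependency graph.
best = max(candidate_weightings,
           key=lambda m: agreement(candidate_weightings[m], human_judgments))
print(best)  # -> model-alpha
```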
I think it's a really interesting approach for funding public goods in d/acc, and it's an important part of this paper. He then goes on to talk about the future. But before we talk about the future and end this, we're going to talk about liability. Yes: regulations, liability, and a soft pause on hardware. The soft pause on hardware is a very, very interesting concept; I was initially really opposed to it. I think the soft-pause argument is a little more interesting to me, a little more compelling than the liability argument, but we can touch on that. So basically, Vitalik makes the case that we need new regulation, and talks about liability as one domain of regulation: figuring out who should be responsible for harms done with AI. You could focus on the users, the people who use the AI in the end. You could focus on the deployers, the intermediaries who offer AI as a service to people. Or the developers, the people building AI and making foundation models: OpenAI, Anthropic, all these guys. And Vitalik argues that putting liability on users feels most incentive-compatible, though there are some shortfalls. On user liability, to quote Vitalik: liability on users creates a strong pressure to do AI in what I consider the right way, focusing on building mecha suits for the human mind, not creating new forms of self-sustaining intelligent life. Which is the building-tools-today idea, right? Building tools, rather than highly autonomous AI agents, to start. How those tools are utilized is a reflection of the people using them, and those people bear some responsibility for how they use them. Yeah, absolutely. There's some risk here in that individual users might not have that much money, or they might even be anonymous, with nobody to hold accountable; the accountability side of liability is a little difficult here. Then Vitalik talks about deployer liability, which also seems reasonable, though it doesn't work with open-source models: because they're open source, anyone can just run them themselves. And then he talks about potentially putting liability on owners or operators of equipment that an AI takes over. So if an AI uses your equipment for harm, say you have a server farm that an AI takes over and uses to launch a botnet attack, you're responsible, because you let the AI take it over. That creates an incentive to make sure your stuff is secure. There's already a version of this incentive when it comes to hacking, though people aren't really held responsible if their servers get hacked. Some companies are, where there's a negligence aspect, but usually that's because of damage to their own users, not because their servers were hacked and then used to hack someone else. So it's an interesting approach to put liability on the individuals who operate equipment that an AI takes over. And then the other high-level strategy around new regulation is a global soft pause button on industrial-scale hardware. So, how would I distill this? Or do you want to try?
I mean, from my understanding, the idea is more or less that because the chips for this are mostly being built by NVIDIA... Sure. Yeah. It's very centralized manufacturing. Right, and because they're built in such a centralized way, we could add some sort of switch to these chips that allows us to reduce the available compute by 90 to 99% for one to two years at a critical period. One thing we didn't really touch on, which is also echoed by Dario Amodei, is the general timeline projection for AGI. Dario says around 2026; Vitalik says about three years to AGI and then another three years to superintelligence. So they're actually a little out of alignment, but close. And 2026 is literally next year, though that paper is a little older now. So the idea is that we could use a soft pause that decreases the amount of available compute significantly, by 90 to 99%, to buy us a little time ahead of the emergence of superintelligence. If superintelligence is truly six years away, we might really need that one to two years. And it doesn't do much harm to include the soft pause switch, whereas not having it could be detrimental. Yeah. I want to touch on the mechanics a little, at least as the paper describes them. It's this idea of something like a multisig, where it requires multiple sign-offs, maybe once a week, for really advanced chips, like A100s, the beefiest AI hardware. So not typical consumer technology. No, nothing you have in your laptop. You have some powerful GPUs at home? Not those. This would be the highest end, the stuff OpenAI is buying in bulk. That hardware would check in once a week to ask, do I have permission to run at full capacity? If it gets the sign-offs, it runs at full speed; if it doesn't, it runs at a reduced rate, something like one to ten percent of capacity, with the verification built in. We'll sketch a toy version of this check-in after this exchange. And the idea is that the signers, this party of maybe three actors, could be something like a non-governmental organization or nonprofit plus two governments. Their job is kind of like the nuclear launch button, but inverted: instead of signing off to launch the nukes, they're signing off that the technology can keep running at full speed. If everybody says we need to hit pause now, all the advanced stuff slows down. This requires centralization, or at least coordination, as well as regulation. There are things I fundamentally don't like about it, to start. I guess I'm considering what technologies might be built on top of these chips: how wide is the effect of a pause? That's something you could probably speak more to. As a consumer, maybe I'm over-indexing on what pausing this technology might do to someone like me. And I suppose the real focus is on the effect of not pausing it, at the point where there's an existential threat.
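As promised, here's a rough sketch of what that weekly check-in could look like, purely as an illustration. Vitalik's proposal doesn't specify an implementation, and real chips would verify signatures in silicon; the signer names, the 3-of-3 threshold, the throttle level, and the HMAC-as-signature stand-in below are all our own assumptions for the sake of the example.

```python
# Toy model of the "soft pause" weekly check-in. HMAC with pre-shared
# keys stands in for real public-key signatures burned into hardware.
# Every name and number here is an assumption, not part of the proposal.

import hmac
import hashlib

# Hypothetical signers: an international nonprofit plus two governments.
SIGNER_KEYS = {
    "intl-ngo": b"key-ngo",
    "gov-a": b"key-gov-a",
    "gov-b": b"key-gov-b",
}
REQUIRED_SIGNATURES = 3    # 3-of-3: every signer must approve each week
THROTTLED_CAPACITY = 0.05  # run at ~5% of full speed without approval

def sign(key: bytes, message: bytes) -> str:
    """Stand-in for a signer publishing its weekly approval."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def allowed_capacity(week: int, signatures: dict[str, str]) -> float:
    """Fraction of full capacity the chip may run at this week."""
    message = f"run-full-capacity:week={week}".encode()
    valid = sum(
        1
        for signer, sig in signatures.items()
        if signer in SIGNER_KEYS
        and hmac.compare_digest(sig, sign(SIGNER_KEYS[signer], message))
    )
    return 1.0 if valid >= REQUIRED_SIGNATURES else THROTTLED_CAPACITY

# Weekly check-in: all three signers published approvals for week 42.
msg = b"run-full-capacity:week=42"
approvals = {name: sign(key, msg) for name, key in SIGNER_KEYS.items()}
print(allowed_capacity(42, approvals))  # 1.0  -> full speed
print(allowed_capacity(42, {}))         # 0.05 -> throttled
```

Note the default in this design: with missing sign-offs, the hardware degrades to a trickle rather than shutting off, no single party can unilaterally keep it running at full speed, and under 3-of-3 any one signer withholding approval is enough to trigger the pause.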
Yeah, the effect of pausing is really aimed at, say, what's happening right now with o1: there's innovation like test-time compute, where you let your models do long-running inference. So hypothetically, maybe we've determined there's some risky inference happening, a model doing some long-range computation, and we need to slow it down because once it finishes it's going to do something really harmful. How do we slow it down? There's this meme on Twitter about bombing the data centers, right? So this is the question: what's the benign version of that? Yeah, exactly. The benign version is, cool, we installed a pause button, a brake, on all these things, and we can hit the global brake. Right. And that's a really interesting idea: the option to decelerate things at a critical moment. And the benefit of the check-in being distributed, maybe verified via something like a blockchain, is that it's not a matter of, our competition is getting ahead of us, let's pause them. Everyone gets paused: everyone using a certain class of chips after a certain date, once the regulation goes into effect, which most of these companies will ultimately be using. Meta, Microsoft, they're trying to get huge amounts of electricity to build huge numbers of data centers. If those data centers all have chips that can be slowed down, then this is an actual mechanism we could use to really slow things down. The complication that comes up is that, due to the ongoing trade war with China and the limits on their access to certain classes of compute, they're obviously developing their own compute. Because there's no longer one global market for compute, you'd then need buy-in from counterparts in other countries. Will they implement it the same way? As soon as we hit pause on our stuff, will they recognize and acknowledge that pause? Is there some bypass? Especially where AI is used for strategic military purposes. So that changes and complicates it a bit. But I generally think that if we can pre-load the coordination effort and get this done now, instead of trying to coordinate in the middle of a crisis, it's actually really good. The question is just how we move through that quickly. And that's really some of the crux of d/acc: how do you get these things done fast, and which ones are the most important to focus on? This is where I feel like there's an additional D, which is discernment. What is the discerning move here, so that we can navigate this path we're moving along so quickly in a sensible way? Yeah, discerning acceleration. That's actually a really good point, because maybe we can't execute all of these things at once. There are a lot of ideas being posited here, but is someone acting on them? Who's driving the charge? I would imagine Vitalik himself has some vested interest in moving a number of these things forward.
There are a number of other thought leaders in the space that we've come into contact with over the last couple of years who I think are probably also actively participating. But one thing Vitalik brings up is that these are temporary stopgaps. A brake like that would actually be great, but to quote him: if something becomes possible to do on a supercomputer at time t, it will likely be possible on a laptop at time t plus five years. So crazy. So we need something more stable to buy time for. Right. And many d/acc technologies are relevant here. We can look at the role of d/acc tech as follows: if AI takes over the world, how would it do so? It could hack our computers, so we should focus on cyber defense. It could create a super plague, so we should focus on bio defense. It could convince us either to trust it or to distrust each other, so we should focus on info defense. And as briefly mentioned above, liability rules are a naturally d/acc-friendly style of regulation, because they can very efficiently motivate all parts of the world to adopt these defenses and take them seriously. Taiwan has been experimenting with liability for false advertising, which is an interesting idea. We shouldn't be too enthusiastic about putting liability everywhere, remembering the benefits of plain old freedom and enabling the little guy to participate in innovation without fear of lawsuits, but where we do want a stronger push to be secure, liability can be quite flexible and effective. So we've got the future left. Yeah, yeah. Let's end on the future. Which is interesting, because Vitalik sets up these three Ds in opposition to another three Ds. He sets our three Ds of d/acc, democratic, differentially defensive, and decentralized, in polarity to domination, doom, and deceleration. Yeah. And there's a great graphic we'll throw up. There's deceleration, a bear: if you slow down, you'll get caught. And there's a fork with three paths. You can go to doom, where all the existential risks play out, just bad outcomes with AI because we weren't ready. There's domination, where humanity is centralized and dominated into some sort of irreversible trap, and there's no agency anymore for humans. Not great. Which kind of feels like doom. It's kind of the same, but doom is really no more humans, versus humans in a bad state. And then there's d/acc, which avoids all the traps and secures a bright future for humanity. We want a bright future. We can have a bright future, actually. Technology is not inherently bad; it's how we use it that determines its value for us. Right. I think fundamentally Vitalik's stake is a positive one. He's saying we have to be discerning, we need to move forward carefully, but we are capable. Ultimately we're capable of overcoming these difficulties, being discerning and defensive, and moving forward with technology in a way that reaps its greatest benefits and limits the bad outcomes as much as possible. Yeah, just touching on his ending remarks. Quoting him: the next few decades bring important challenges, and there are two challenges that have recently been on my mind. Powerful new waves of technology, especially strong AI, are coming quickly, and these technologies come with important traps we need to avoid.
It may take five years or fifty years for artificial superintelligence to get here. Either way, it's not clear that the default outcome is automatically positive, and there are multiple traps to avoid. The second challenge is that the world is becoming less cooperative. Many powerful actors that previously seemed, at least sometimes, to act on high-minded principles, cosmopolitanism, freedom, common humanity, are now more openly and aggressively pursuing personal or tribal self-interest. But each of these challenges has a silver lining. We have powerful tools to do our main work more quickly. He goes into things that are happening, like advanced biotech: writing software is easier than ever, our basic research understanding of viruses has really grown, and computing and biotech are enabling synthetic biology tools you can use to adapt and monitor your own health. There's a tailwind of good that's happening technologically. And enablement: with the advent of many of these tools, we're seeing people from varying domains able to participate in other domains in a way they never have before, with the help of AI. So yeah, I think his sign-off remarks are pretty positive here. I'll read his last statement: we humans continue to be the brightest star. The task ahead of us, of building an even brighter 21st century that preserves human survival and freedom and agency as we head toward the stars, is a challenging one, but I am confident that we are up to it. Yeah. So are we. We're up to it. I think that's why we're here, that's why we're sharing these ideas, that's why we're building. Ultimately we see the future as a good one, and we're excited to be part of it and to shape it, especially with the values we have. Yeah, I think in further episodes maybe we'll go a little deeper into Tiny Cloud and what sovereignty looks like, some of our values around sovereignty, and how we think there is in fact a path forward to achieve personal digital sovereignty. And we want to talk about all kinds of things, not just sovereignty and agency: cybernetics, what it looks like to be part of this giant organism that is technology and humans combined. How do we maintain the values that we have today, the freedoms and abilities to do things? How do we maintain privacy? How do we grow technologically and mature into whatever humans are growing into, in a way that honors where we've come from and what we are? So I just kind of want to talk about some of this stuff, and ultimately we're really excited. So thank you for listening, thank you for joining us, and until next time. Thanks for tuning in. All right? Cool. We did it. We did it. Oh no, it wasn't recording.
