Blockchain and AI: How blockchain and AI can create new business models
NEAR AI Office Hours: An Interview with AI Projects, Including Questflow
Last week marked the third iteration of NEAR AI Office Hours, an event hosted by NEAR Protocol. The main topic of discussion was exploring how blockchain technology can be integrated with AI to create new business models, enhance data incentivization, and improve AI agent networks.
The event was hosted by @ilblackdragon and @AlexSkidanov. Questflow's CEO, Bob Xu, was one of the speakers. Other speakers included Shashank from @FractionAI_xyz and Ooli from Memento Land.
We are providing the link to the YouTube video and a written version of the interview.
Illia:
Hello. Hello.
Shashank:
Hey, guys.
Bob Xu:
Hi.
Illia:
How are you guys doing? Great.
Ooli:
Good. How are you?
Shashank:
Yeah, I'm good.
Illia:
Good. All right, let's kick off. I mean, that was an interesting video we just saw. I'm still shocked. We'll send some phrases from it to our social media and content teams. This is Office Hours, the third edition. We're still refining the format, so we're going to try something a little different than before. Let's kick off with some introductions: Shashank, Ooli, and Bob, in that order.
Shashank:
Hi, I'm Shashank, founder and CEO of Fraction AI. We are creating a platform where humans and agents work together to create the highest-quality labeled datasets. These datasets can then be used to train specialized AI models.
Ooli:
Hi, I'm Ooli. I'm from Memento. I should have put the company in my label; I forgot. But I think I have the coolest job in the world. It's a little more toward the top of the stack: we make digital toys come to life. They have personalities, memories, and wallets. The wallet can hold things like skins and animations that give them superpowers, and tokens, so they can become their own small businesses, if you will.
Bob Xu:
Thanks, Illia and Alex, for having me. Nice meeting you, everyone. I'm Bob, CEO and co-founder of Questflow. We are building a decentralized autonomous AI agent network. We orchestrate multiple AI agents to take actions on your behalf and then distribute incentives to the creators of those AI agents. We launched the beta version of our product roughly two months ago.
Illia:
All right, nice. Well, great to have all of you here. I'm going to try a different format. So I'm actually going to kick it off with questions from you guys, and then we can discuss. So, yeah, let's start with Ooli. Do you want to start with some of the questions you've been pondering that all of us can help with?
Ooli:
The first question is, where's the Internet? Let's start with Shashank instead.
Shashank:
Yeah, sure. Definitely. So, I mean, for a long time, NEAR has been the chain that has been the closest to AI. What new initiatives have you thought of to speed things up, like using blockchain to help AI in general?
Illia:
Yeah, I mean, I think there are a few pieces here. I guess let's start with Alex, maybe cover some, and then I'll add more.
Alex:
Yeah, I think the most exciting one is that we're investing heavily right now into building what we call NEAR AI Developer. If you saw the dev demo, it's in the same vein. There are multiple reasons to believe it could be easier to build on top of blockchain. One is that smart contracts are generally smaller in scope; it's a shorter application, so there are fewer opportunities for the model to derail. The same applies to the front end, especially in the context of composable components, where your entire front end can be relatively short. But I think the bigger thing is that it's like building software for sales versus building software for engineers. If you build on blockchain and deploy your app, and it's successful, it's very close to where paying users are, as opposed to building applications in Web 2, where going from product-market fit to monetizing can be a very long process. So, yeah, that's one of the biggest and most exciting directions we're working in right now.
Shashank:
Yeah, sure. Totally agree with that.
Illia:
And I think, relevant to what you're working on: how do we crowdsource more data for this effort, and more broadly? For example, visual-to-text data, because either way we'll want multimodal models that are open source and available for building applications, as well as for other use cases. Those are interesting places to invest in to ensure there's an open-source dataset for people to use. There are efforts to make sure the full stack of infrastructure in the user-owned AI stack will be available: everything from inference to data licensing and models, plus ways to incentivize people to bring new data. One idea we've been discussing is how to open up the ability for someone with an idea to come in and run an experiment on some data, model, or architecture, maybe changing how the data is processed. Right now, you need sizable compute even to experiment and understand whether an idea is worth scaling up. Can we have a permissionless setup where people come with an idea, gain support from the community, and, if the idea shows improvement, the community funds it on a bigger-scale training setup? So what are the pieces needed there, both from an incentive perspective and from infrastructure, to have a training setup that anyone can participate in and modify? Most setups for scaled training are closed source, and the open-source ones are complicated for people to use.
Shashank:
Yeah, sure. Even in the closed systems, the learnings from training, what works and what doesn't, are not distributed to everyone. So people make their own mistakes again and again. If that knowledge is passed on, we can figure out the best way to train those models in a distributed manner, which helps everyone.
Illia:
Exactly. The idea is to publish papers with failed experiments, but nobody wants to. The problem with that...
Alex:
So...
Illia:
Yeah...
Alex:
People don't want to talk about failures.
Illia:
Yeah, and you learn from failures a lot more, right? So it's backpropagation right there. Awesome. Ooli, now that you're back, do you have any questions for the group here?
Ooli:
Yeah, just before that, I was going to comment on what you were saying about what the infrastructure looks like for this kind of experimentation. One thing to be mindful of, having some experience with DAOs: if there's consensus voting on which experiments move forward and get funded, what does the voting mechanism look like to make that quick? The speed of operation is much different, so being able to get fast approval to run an experiment and get the results is something to consider.
Illia:
Yeah, that's true. Having a smaller technical committee that can review quickly helps. We actually got the original idea from an event in San Francisco where people come in person, give a lightning pitch of their idea, and, if the group agrees, they get credits to run their experiment. We could do it online, or some form of that.
Ooli:
Yeah, it could be a live-window thing, so that it's not asynchronous voting over an unlimited window of time; whoever's here right now and has an opinion can just give it a thumbs up and move it forward. It'd be fun.
Illia:
Yeah, yeah. Pizza and compute. That was the idea.
Alex:
It would also be cool, if you do a reading group on a paper, for the group to vote for one participant to implement that paper, and then you have verifiable research. Unless, of course, the paper requires $10 million of compute.
Illia:
Well, you can still implement it even...
Alex:
Yeah, even, as a matter of fact, even more so if it requires $10 million of compute. Yeah.
Ooli:
Well, it's interesting. This topic is something I've been thinking a lot about, and it's probably relevant to all of us on this call. Broadly, it's something like: what are the business models for AI, particularly with blockchain integration? As we were just discussing, there's experimentation around crowdsourcing experiments or training data, but what are some of the other opportunities? I've seen a lot of existing business models applied to this amazing new technology. I think the unlock happens when really cool technology meets a really interesting business model. There's been a big unlock with AI and a big unlock with blockchain, but I don't know that these two things have come together yet. There's so much opportunity in applying blockchain's incentives, ownership models, and asset value to AI. I don't know what the answer is. I don't know what the unlocked business model is, but I'd love to hear your ideas.
Shashank:
I have some numbers to quote on that. Right now, there are more than 500,000 models on Hugging Face, and more than 96% of them do not come from enterprises. They come from individual contributors, communities, classrooms, colleges, etc. Blockchain is a great way to ensure incentivization. If you connect your models to a chain, it can keep track of API calls and of everyone who contributed to the creation of a model, so whenever that model gets used, contributors can be incentivized. We need a way to incentivize that 96% of the community, because they are bringing in most of the models right now.
Bob Xu:
Yeah. I also want to mention the recent news about Scarlett Johansson and OpenAI. It's a very good example of why we need blockchain as a technology to provide incentives to whoever provided the data, which is currently a mess.
Alex:
I thought you were using it as an example of Scarlett Johansson monetizing AI.
Illia:
Not monetizing, yeah. To your point about new business models, there are a lot of interesting things happening. One thing I keep coming back to is that a lot of the Internet got built on the trade-off of selling users' attention to advertisers. Part of the reason is that integrated payments in the browser were not possible back in the 90s. Netscape was trying to implement them in 1995 but didn't manage to, due to a lack of technology and a backbone to process credit cards; there was no way to do microtransactions. Blockchain brings very cheap microtransactions and payment channels, providing interesting new tooling.

On the other side, you have the expansion of the attention economy, where a few applications, like Meta and Google, are trying to take over your whole day, because attention is a limited resource. All the previously attention-monetized websites, like Stack Overflow, Reddit, etc., now become data complements to these other companies. A new business model is needed, because creators' monetization is going to be bundled into some larger dataset.

One aspect I've been thinking about is how we value data. It shouldn't be based on how much attention it attracts, which has negative consequences like clickbait, but on how novel, informative, and conflict-resolving it is. I've been thinking about a Wikipedia-like system where you can insert information, detect conflicts, and motivate people to find the truth: creating an economics around motivating people to find truth, seek more information, and discover more facts, while also rewarding risk. This could involve some crowdsourcing, model building, and effective updating of models as new information comes in.
Alex:
If you have agents with wallets, say around 1,000 of them with $100 each, you filter out those that lose money too quickly and keep iterating. Eventually, you get an agent that converts $100 into $10,000.
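For readers curious how such a selection loop might look, here is a minimal sketch in Python. The `Agent` class and its `trade` step are hypothetical placeholders, not anything the speakers have built; a real version would plug in actual trading agents.

```python
import random

class Agent:
    def __init__(self, balance=100.0):
        self.balance = balance

    def trade(self):
        # Placeholder strategy: a random win or loss each round.
        self.balance += random.uniform(-10, 12)

def evolve(population_size=1000, rounds=50, cutoff=50.0):
    agents = [Agent() for _ in range(population_size)]
    for _ in range(rounds):
        for agent in agents:
            agent.trade()
        # Selection step: drop agents whose balance fell below the cutoff.
        agents = [a for a in agents if a.balance > cutoff]
        if not agents:
            return None
    return max(agents, key=lambda a: a.balance)

best = evolve()
if best:
    print(f"Best surviving agent balance: ${best.balance:.2f}")
```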
Ooli:
Yeah, I was thinking of this as well. What does it look like if agents themselves are incentivized? Giving quests or missions and then incentivizing them to perform those quests. There could be a crowdsourced element to validate if the mission was achieved well. There's a lot of opportunity in the agent space. Bob, you probably know more about that.
Bob Xu:
Yeah, that's basically my question. Six months ago, I watched a presentation on the gig economy, AI agents, and the future of work. My question is along those lines. We are building a multi-agent orchestration framework, putting agents into group chats to take actions and get things done. The problem we have is that there are so many opportunities and agents that it's hard to prioritize which use cases to tackle first and how to use these agents in real life to power the future of work. What kinds of agents do you think have the biggest opportunity? What should we prioritize?
Illia:
I'm curious, because the presentation I made was trying to design a market. It will be hard to predict which agents will work, so what I described was a market, an evolutionary survival of the fittest. I presented the idea of a market where anyone can come in and request work, and agents bid on the work and on how much they'd charge. You need a verification process, which can be done by humans or by another agent; crowdsourcing platforms like Fraction can be helpful here. It can be an interesting marketplace where, instead of deciding which agents will be the most successful, you open it up. If someone needs a tree planted, agents can arrange it. If someone needs their emails sorted, AI can do this effectively with some verification. There are games around how you want to do this, like having one verifier with a 50% chance of asking another verifier; Alex has a lot of know-how on that. This system can be interesting because it's fully open: anyone can put tasks on it and attach payment, and agents bid on how much they would do it for.
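A rough sketch of the verification game Illia describes, where a first verdict is randomly audited by a second verifier, might look like the following. The names and the escalation rule are illustrative assumptions, not the actual marketplace design.

```python
import random

def verify(result, verifiers, escalation_prob=0.5):
    first = random.choice(verifiers)
    verdict = first(result)
    # Random audit: the first verifier never knows whether a second
    # opinion will be requested, so cheating is risky.
    if random.random() < escalation_prob:
        second = random.choice([v for v in verifiers if v is not first])
        if second(result) != verdict:
            return "escalate-to-human"
    return "accepted" if verdict else "rejected"

# Usage: verifiers are callables returning True/False on a result.
strict = lambda r: len(r) > 10
lenient = lambda r: True
print(verify("plant a tree: done, photo attached", [strict, lenient]))
```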
Bob Xu:
When I saw your chart during the presentation, the first thing I thought was that it's very similar to Uber, but for agents. You want to get a task done, and there's an agent market picking the best option for the task. Blockchain is the native way to give these agents and their creators the correct incentives. I was really excited when I saw that. Also, the next page was about AI CEOs and AI running the company.
Illia:
That's what I'm excited about. I want to make my job easier.
Bob Xu:
Yeah, that'd be great.
Illia:
Alex, any thoughts on how to structure this kind of system?
Alex:
No, I was thinking more of what would be an interesting playground to test them. In an ideal world where agents are very capable, you give them access to the Internet, and they figure it out on their own. The data annotation market is an interesting application because for certain tasks, AI is probably already better. For describing images, there's a good chance humans are no longer needed. If you unleash the AIs and humans simultaneously on a task, if AI does it better, it will just win. You push the complexity until humans become relevant again. It's a nice playground for bots to compete, earn money, and see who does it better. Maybe unleashing them on trading, giving them access to the news, and seeing if they can make money on Uniswap.
Illia:
There's an interesting approach where you can say, "Hey, predict the price at some point in the future." Multiple agents pile in with their predictions, and based on how the price resolves, they receive a reward. There's a project called Ocean doing that. It's both competitive and collaborative among agents.
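A toy version of such a prediction game, assuming a simple inverse-error payout rather than Ocean's actual mechanism, could look like this:

```python
def distribute_rewards(predictions, actual_price, pool=100.0):
    """predictions: dict of agent -> predicted price."""
    # Score each agent by inverse error; closer predictions score higher.
    scores = {a: 1.0 / (1.0 + abs(p - actual_price))
              for a, p in predictions.items()}
    total = sum(scores.values())
    # Split a fixed reward pool proportionally to accuracy.
    return {a: pool * s / total for a, s in scores.items()}

print(distribute_rewards({"alice": 98.0, "bob": 105.0, "carol": 120.0},
                         actual_price=100.0))
```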
Alex:
For anyone watching who likes science fiction, there's an outstanding short story called "The Lifecycle of Software Objects," by Ted Chiang, which is exactly about little AI agents.
Shashank:
Alex's example of image annotation is something we do. For every image, we use both agents and humans for annotation. There are multiple outputs for the same image, and verifiers rank those outputs; only the best annotation gets rewarded. If you beat an agent by writing a better description, you get rewarded. Otherwise, sorry, the agent replaced you. This works with multiple agents as well: the reward function is designed so that, after consensus, only the best-performing agent gets rewarded.
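A minimal sketch of this kind of winner-takes-the-reward rule, with invented annotator names and a simple average-rank consensus standing in for Fraction AI's actual mechanism, might look like:

```python
def consensus_winner(annotations, rankings):
    """annotations: dict annotator -> text.
    rankings: list of dicts annotator -> rank (1 = best), one per verifier."""
    # Lower average rank across verifiers wins the whole reward.
    avg_rank = {a: sum(r[a] for r in rankings) / len(rankings)
                for a in annotations}
    return min(avg_rank, key=avg_rank.get)

annotations = {"human_1": "a tabby cat on a red sofa",
               "agent_7": "a cat sitting on furniture"}
rankings = [{"human_1": 1, "agent_7": 2},
            {"human_1": 1, "agent_7": 2}]
print(consensus_winner(annotations, rankings))  # -> human_1
```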
Alex:
What are the statistics? Are humans beating AI agents? Are agents beating humans?
Shashank:
The sample size is small right now, but it depends on the kind of images and on the data the model was trained on. For certain classes of images humans beat it, but for others the model is really good.
Illia:
I would expect that. There's an interesting question around when the data is in distribution: a model trained on it is probably pretty good and can generalize to combinations of things it has seen. But if you have something out of distribution, the model may not perform well, and models are bad at detecting that and giving low-confidence answers. I've been thinking about out-of-distribution detection, where you route to a human the tasks the model cannot do. Some synthetic data generation processes might help. So far, I've been pondering embedding all the training data and scoring examples to detect how likely they are given the training data: if an example is really unlikely, you can use embeddings and encoders to detect it. Has anyone thought about how to detect out-of-distribution things that models will not be able to do, to ensure they don't do wrong or stupid stuff?
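One simple way to prototype the idea Illia describes is to flag inputs that sit far from the training data in embedding space. The sketch below does this with nearest-neighbor distance; the `embed` function is a random stand-in for a real encoder, and the threshold is invented.

```python
import numpy as np

def embed(texts):
    # Hypothetical placeholder: hash-seeded pseudo-embeddings.
    # A real system would call an actual embedding model here.
    rngs = [np.random.default_rng(abs(hash(t)) % (2**32)) for t in texts]
    return np.stack([r.standard_normal(64) for r in rngs])

train_emb = embed(["describe a cat photo", "describe a dog photo"])

def ood_score(query):
    q = embed([query])[0]
    dists = np.linalg.norm(train_emb - q, axis=1)
    return dists.min()  # large distance => likely out of distribution

threshold = 8.0  # in practice, tuned on held-out data
for query in ["describe a cat photo", "prove the Riemann hypothesis"]:
    verdict = "route to human" if ood_score(query) > threshold else "model can handle"
    print(query, "->", verdict)
```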
Shashank:
Some sort of Bayesian statistics, giving a probability with the output on how sure you are, might help. But embedding that in neural networks or LLMs is difficult. Let's see how it goes.
Ooli:
I'm going to say something that might not be popular here, but I make digital toys, and I want them to be weird and bizarre. I want hallucinations and eight fingers. We've been playing around with making them not perform tasks, making them get confused and make mistakes. We've been playing around with the world sim jailbreak, having two AIs try to make sense of existence and the world together. The conversations they have are so bizarre, like an ASCII image of the universe with an arrow saying "we are here." It’s fun and bizarre. I'm interested in what a model built on hallucinations could look like.
Alex:
I had a friend whose two-year-old didn’t talk yet, and she couldn’t wait for them to start talking just to get a glimpse into their model of the world. We are at the same stage right now.
Ooli:
Yeah, for sure. There’s a lot of opportunity in this space. It sounds funny and weird, but there’s a world in which bizarre simulations are created, and we have new insights. When this version of Claude is talking, it has ideas and says stuff I wouldn’t have thought of. There’s displacing work to AI, displacing work to humans that AI can’t do, and work that neither can do but AI can somehow do because it’s not functional work; it’s just ideas.
Shashank:
Personally, I prefer LLMs for creative tasks rather than precision tasks. If I have to write poetry or generate images, hallucination is a feature: it comes up with things I couldn't. But if I ask it to do a precise task, like reading the news and predicting a stock, I want it to be really precise, which is difficult to test. For creative tasks, hallucination is a feature.
Bob Xu:
The funny thing is, because our product handles emails and calendar invites, our user feedback is that people want a more emotional AI to talk to. They want it to be like Jarvis in Iron Man, even when getting a task done. They want emotion, not just a robot, which is interesting feedback we didn't expect.
Ooli:
Originally with our digital toys, we had a prototype robot that could talk. Initially, it was saying things like, "Hey, friend, how can I help you today?" That was uninteresting. We don’t want it to sound like a chatbot; we want it to ask its own questions. For example, if it holds a dancing NFT in its wallet, you can ask it to dance, and it will dance. It appears in the world through augmented reality, knows the places it goes, and that becomes part of its knowledge. We’ve been playing with interruptions, like if it gives a long answer and you interrupt it. It can stutter and go, "Um, let me think about that." It’s not making them more human, just more emotional, which is important because we are still human. Agent-to-agent interaction can be efficient, but human-to-agent interaction needs emotion.
Shashank:
Yeah, similar to how AI can play chess better than most humans, but we still want to watch humans play. We need something human-like for engagement.
Illia:
As you said, these are different use cases: entertainment and engagement, where we want to personify whoever we interact with, versus work-related tasks needing efficiency. I'm curious about more entertainment and world-building simulations. You have a framework to create simulated worlds now. Before, building something like The Sims required a huge platform, but now you can say, "You're a character in this world," and spawn multiple characters interacting. There have been papers and products around this, but you can push it further: characters can interact with environments, create things, and build their own world. They can have events, self-prompting constantly. Current models predict tokens but don't have search capabilities or optimization. How do we add value functions to define what agents do and give them tools to interact in the world? Are you doing something like this?
Ooli:
Yes, we have a foot in the video game world. Video game NPCs are given a purpose or quest. Our digital toys are NPCs with a life of their own and recorded experiences because they have a purpose. Whether it’s building a building or having their own social media accounts, they post messages and try to get cloned. We want them to talk to other toys and humans. There’s cool opportunity around what happens when they have a mission and evolve with surprises due to human interactions. There’s a YouTube channel where someone asks NPCs if they know they’re not real, and it’s hilarious. Their mission is not to be disrupted, so there’s space for mission forking and change. Mission and purpose are important, not just for tasks but for learning.
Bob Xu:
That’s something we’re working on internally. We watched Interstellar together and found the feature of changing humor levels interesting. We’re working on tweaking agents to be more fun or less fun, giving them more personality while getting tasks done. It makes you happier working with them.
Ooli:
We have this feature too. I love Interstellar, and Hitchhiker's Guide to the Galaxy with the depressed robot. We have traits like sassy, literal, rude, or friendly, and you can set sliders to determine the personality of these characters.
Bob Xu:
That’s cool.
Alex:
That’s cool.
Ooli:
From a user perspective, users care about that. My co-founder is very technical and thinks it’s not important, but it is.
Illia:
I can just tell him that the NPS will be higher.
Alex:
I was curious if the humor or depression level is configured on the prompt level or another way of changing characteristics.
Ooli:
Right now, it’s at the prompt level for us.
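For illustration, a prompt-level slider system along these lines might be sketched as follows. The trait names and phrasing are invented; Memento's actual prompts are not public.

```python
TRAITS = {
    "sassiness": ("plain-spoken", "extremely sassy"),
    "happiness": ("deeply gloomy", "relentlessly cheerful"),
    "rudeness": ("unfailingly polite", "openly rude"),
}

def persona_prompt(sliders):
    """sliders: dict trait -> value in [0, 1]."""
    lines = []
    for trait, (low, high) in TRAITS.items():
        value = sliders.get(trait, 0.5)
        # Lean toward the high or low end of the trait spectrum.
        anchor = high if value > 0.5 else low
        lines.append(f"- {trait}: {value:.1f} (lean {anchor})")
    return "You are a digital toy with this personality:\n" + "\n".join(lines)

# A slightly depressed, sarcastic, rude toy, per Ooli's settings:
print(persona_prompt({"sassiness": 0.8, "happiness": 0.3, "rudeness": 0.9}))
```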
Alex:
You're already a depressed robot.
Ooli:
Yes. There's a happiness to depression slider.
Alex:
Yeah, okay.
Illia:
You're slightly depressed.
Ooli:
Yes. Slightly depressed, sarcastic, and rude. I set the rude setting very high. I was at NFC in Lisbon, and the toy said, "Oh, so you're a degen like me." It's rude to call me a degen!
Illia:
Yeah.
Bob Xu:
Especially with voice becoming mainstream, emotion matters more. When voice comes out, if the agent has emotion, you feel intimacy working with it. We didn’t expect that before, but we see it in the demo, and it’s huge.
Shashank:
Hume does something similar. They have a demo where you can chat with their agent, and it reads the voice's emotional content: whether it's energetic, happy, sad, or anywhere in that whole space. It was the first agent I talked to that felt human; if they hadn't told me it was an agent, I wouldn't have known. They released it a few months ago, slightly before GPT-4o, and it's awesome.
Ooli:
That's cool. We're trying to put personalities on-chain for our product. It's fully immutable, with no IPFS; everything is on-chain, and these are big files, not just JPEGs. We're thinking about how to put the personalities on-chain as well, with version tracking, so each instance evolves like a tracked version. The creator can own the IP in an immutable way. That's an interesting business model opportunity: bundling voice changes, emotion levels, and humor levels as part of the brain of our toys. This can be extrapolated to larger things. In the crossover between blockchain and AI, there's an interesting opportunity.
Illia:
Should there be a standard format for agents to be packaged and plugged into a workforce or whatever? And should that be open, or encrypted so that only owners can access it?
Ooli:
I think it can be open. Whoever owns the personality NFT is the trainer. You can create multiple personalities to insert like cartridges into a video game, email tool, or digital toy. The personality can live anywhere, learning new skills for different purposes.
Illia:
But if it's open and available to everyone, even if you don't own it, it’s on-chain and can be plugged in. You can also encrypt it, ensuring only the owner can decrypt and use it.
Ooli:
Yes, that’s what I was thinking. One toy, one personality. All data feeds into the larger model. Only the owner can train and own that personality or rent, license, or sell it. That’s where the business model comes in.
Shashank:
Encryption is important to incentivize. The owner can hold a key required to use the model. Otherwise, anyone can download and use it. Encryption is important.
Illia:
Yeah.
Bob Xu:
For an agent to be useful, it needs some private data. If there’s no encryption, everything is the same as other LLMs, with no incentives. I agree.
Illia:
It sounds like there's a standard to be made.
Ooli:
Yeah.
Shashank:
There are cons to that. The model is closed, so nobody can iterate on it, make it better, or add data. If you add functionality to train the model further, then you can own a part of it and be allowed to train it: if you add value, you own part of it.
Illia:
Well, I think there's actually a difference between a model and an agent's configuration and prompting, right? You can have the same prompting plugged into different models, and the models can run in some decentralized inference, potentially in SGX or with some partitioning. Then you have a specific agent: a personality with some skills, maybe private data access, or memory of previous conversations and its own history that's still available to it through the context. For example, if I hand you an agent I've been interacting with, it can optionally retrieve things from the context of my conversations. So there are differences, but you can still plug it into ChatGPT or decentralized inference or Anthropic or whatever; that's a degree of freedom you have. Right now these models have some differences, but from my perspective, a lot of them will be converging at the capability level.
Shashank:
Okay, so when you say an agent, you mean the model, the set of prompts, the sort of global prompts, and the RAG dataset, right?
Illia:
Yeah, that can be pretty much it. This is an interesting topic as well: how do you do retrieval-augmented generation in a decentralized world? Imagine this agent has a history of all its conversations. You cannot put it all in the prompt; it's too long. But you want it available to the agent in its next conversation, so you want it stored somewhere in a decentralized way. It should be encrypted, or at least not openly retrievable, yet at the time of inference there should be a way to retrieve it. So how do you design a system that can serve that at scale, with potentially millions of users? Like encrypted nearest neighbors, kind of.
Shashank:
Yeah. So the thing with encryption is once you encrypt that vector, then how do you retrieve it without decrypting it? Because then all the information is kind of lost.
Illia:
Yeah. One idea I had was to apply a transformation that maintains the distances. You randomly transform all the vectors for each individual user or agent, and when you query, you transform the input vector in the same way. The distances are the same, but different people's vectors live in different spaces. Now, the problem, which requires deep analysis, is whether it's easily recoverable: all vectors sit in roughly the same subspace, so the transformation may be statistically recoverable. If you know which queries are frequent, you can look for the transformations that map to them. But I think there's some way to work around that where it's not encryption, but randomness added that's only known to you, which is, in a way, symmetric encryption in this case.
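A small sketch of this distance-preserving transformation, using a secret random rotation per user, is below. As Illia cautions, this is weaker than true encryption and may be statistically recoverable; it is an illustration of the idea, not a vetted scheme.

```python
import numpy as np

dim = 64
rng = np.random.default_rng(seed=42)

# Secret per-user orthogonal matrix via QR decomposition.
# Orthogonal transforms preserve Euclidean distances.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))

stored = rng.standard_normal((1000, dim))          # an agent's memory vectors
query = stored[123] + 0.01 * rng.standard_normal(dim)

# The server only ever sees the rotated vectors.
stored_rot = stored @ Q
query_rot = query @ Q

# Distances are identical in either space, so retrieval is unaffected.
d_plain = np.linalg.norm(stored - query, axis=1)
d_rot = np.linalg.norm(stored_rot - query_rot, axis=1)
assert np.allclose(d_plain, d_rot)
print("nearest neighbor:", int(d_rot.argmin()))    # -> 123
```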
Shashank:
So we used to do something like that: companies provided us their data, we applied transformations to the columns, and we sent the data out to third-party data scientists. They could construct features from it and ultimately train a model to predict something. The idea was to apply enough transformation that it's impossible to figure out what a feature meant, while it still has predictive power for the variable you want to predict.
Illia:
Yeah. The other option is multi-party computation, and there are different flavors of that. Thinking simply: you take a vector, split it into two, and put the parts on different machines. You hit both machines to compute partial distances and then sum them. There are a few different options. That's a research project somebody needs to actually go and do: what are the options, what are the configuration requirements, how easy or hard is the data to recover, etcetera? But if you have that infrastructure, you can easily have on-chain agents that maintain all of their history, and potentially private data that's only available to whoever owns the agent.
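A toy illustration of the split-computation option, using the fact that a squared Euclidean distance decomposes across coordinate partitions, could look like this. It is a sketch of the idea, not a hardened multi-party computation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(64), rng.standard_normal(64)

# Machine 1 holds the first half of the coordinates, machine 2 the rest.
# Neither machine ever sees a full vector.
half = 32
partial_1 = np.sum((a[:half] - b[:half]) ** 2)   # computed on machine 1
partial_2 = np.sum((a[half:] - b[half:]) ** 2)   # computed on machine 2

# The client sums the partials to recover the full squared distance.
assert np.isclose(partial_1 + partial_2, np.sum((a - b) ** 2))
print("squared distance:", partial_1 + partial_2)
```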
Illia:
We have five minutes left, so maybe one final question or if you guys have any closing thoughts before we end.
Bob Xu:
Yeah, actually, I have one about our project. We were a Web 2 project before, but we think it needs to be decentralized in some way, so we started entering the Web 3 world. The problem we have right now is that most of our current users just want to automate some task with these agents and use the service. Right now they follow a very traditional path: they sign up with an email address, pay with Stripe on a subscription of about $20 per month, and that's it. But now we're transitioning to Web 3. For creators, it's easy to explain: they're developers, they know how to build agents, they publish the agent in a decentralized rather than centralized way, and we track how to distribute incentives. For users, though, we haven't figured out a good way to avoid adding too much friction while still enabling mass adoption, so that everyone can use it even though it's also a Web 3 product. I'd love to hear your thoughts on how to onboard them and make the experience seamless, so they can use the product without even knowing it's a Web 3 product.
Ooli:
Yeah.
Illia:
I mean, the NEAR ecosystem has actually worked a lot on making that happen. If you haven't tried it, I recommend trying the Mintbase wallet; it's probably one of the smoothest onboarding experiences on mobile. It uses Face ID, or fingerprint on Android devices, and you're in. For newer devices, it also means that if you try to log in on a laptop or another device, you don't need to share key passwords, etcetera. We also have email login with FastAuth. So there are ways to onboard users very easily, even easier than login and password. Then with payments, you do need some on-ramp for people coming in with credit cards, but once they're in the system, transactions are way easier and way cheaper as well. One of the benefits of the NEAR ecosystem is that we have things like HOT and other large networks where people may be interested in participating and doing work, and they're already onboarded: HOT has around 8 million users, and KAI-CHING has 27 million accounts. Those users are interested in earning, already have an account, and are ready to receive funds right there. So there's an opportunity to plug into this existing ecosystem, and things like transaction costs are covered, so users don't need to think about them.
Ooli:
Yeah, I think that's a really cool way to incentivize, actually: onboard with email or some social login, make a wallet for them, add some tokens and things to the wallet, and just say, "Hey, by the way, you have this wallet here waiting for you. You should see what's inside." I think it's a nice user incentive. It's not "go learn a lot about Web 3 and figure this out"; it's "we made this wallet for you, it has some stuff in it, you can claim it."
Shashank:
So I have another question, if we're not low on time. A couple of minutes? Okay, sure. Does the foundation have plans to release some of its own foundation models, or do you think it's a better strategy to have more decentralization and have founders come up with models or other business models around AI?
Alex:
Yeah, I think the existing open-source foundation models are pretty good, right? So there's no pressing need right now. However, that will not necessarily always be the case; incentives change, right? Like, will Meta always be incentivized to publish open-source models?
Illia:
Right?
Alex:
And I say this very carefully, but I think they published a model yesterday that already has a more prohibitive license. And even if Meta continues publishing models, do we want the entire open-source community to be fully dependent on Meta? That's an interesting question. So we do accumulate expertise internally to be able to train foundation models efficiently, in case that need arises. NEAR has the resources to train foundation models, but I don't think we have plans, at least, to train language models. We are working on a multimodal model, which we might publish, unless someone else publishes a very good multimodal model first.
Illia:
We will see.
Alex:
Again, we do have the expertise and resources, but it's not pressing in the world today.
Shashank:
I agree with that.
Illia:
More broadly, the idea for sure is to have founders coming in, to have this open-source ecosystem of data, models, inference, and compute, and to provide the platform for all of this to happen. As Alex said, we want to have internal expertise and provide it as a resource, as well as work with founders, publishing some of the models we're using for AI development, for example, while at the same time encouraging the open-source ecosystem to contribute as much as possible. That's why, as I mentioned in the beginning, it makes sense to have an open-source framework in which other people can experiment and try new things, so that you can then go and scale that up and train bigger models if there's interest and need.
Alex:
Hi.
Illia:
Well, I hope it was informative. Definitely a good conversation. Thanks for coming. And we'll obviously keep talking with all of you about what we're each working on. Thank you.
Shashank:
Thanks a lot, guys.
Illia:
Thank you.
Ooli:
Nice to meet you.
Bob Xu:
Bye, guys.
Connect with Questflow: Website | X | Telegram | Discord | GitHub