Background:
AI is hyped, yet few people are building AI dapps, and there are few tutorials on AI dapp development online. We will discuss how to leverage existing and near-future AI infrastructure to build better decentralized AI applications.
Outline:
The obvious difference between an AI dapp and an ordinary dapp is AI inference, which poses significant challenges for existing infrastructure and development workflows. Developing and maintaining the models themselves is also a significant challenge.
- Data collection
- Data labeling
- Giza datasets
- Using ChatGPT or Web2 inference as a service (see the call sketch below)
- Oasis (TEE), ICP, Repill (GPT wrapper service)
- Host models yourself
- io.net, Akash
- Can also host ZKML services: Giza, EZKL, Modulus
- On-chain model/agent inference
- Ritual, ChainML, Morpheus, Olas, Vana (we can test some benchmarks), flock.ai, Bagel, AO, Livepeer
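Whether the endpoint is a Web2 provider, a TEE-backed wrapper, or a model you host yourself on io.net or Akash, the dapp backend typically talks to it over an OpenAI-style HTTP API. Below is a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint; the URL, model id, and environment variable names are placeholders rather than any specific provider's interface.

```ts
// Minimal sketch: a dapp backend calling an OpenAI-compatible
// chat-completions endpoint. URL, model id, and env vars are placeholders.
const API_URL =
  process.env.INFERENCE_URL ?? "https://api.openai.com/v1/chat/completions";
const API_KEY = process.env.INFERENCE_API_KEY ?? "";

export async function askModel(prompt: string): Promise<string> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model id
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`inference failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-style responses return the text in choices[0].message.content
  return data.choices[0].message.content;
}
```

On-chain inference networks generally follow a request/callback pattern: a contract emits a request event, an off-chain node runs the model, and the result is written back on-chain. Below is a minimal sketch of the off-chain node side using ethers.js (v6); the contract address, ABI, event, and function names are hypothetical and do not match any particular protocol.

```ts
// Minimal sketch of the request/callback pattern used by on-chain inference
// networks: listen for a request event, run the model off-chain, post the
// result back. Contract address, ABI, and all names below are hypothetical.
import { ethers } from "ethers";
import { askModel } from "./inference"; // the helper sketched above

const ABI = [
  "event InferenceRequested(uint256 indexed id, string prompt)",
  "function submitResult(uint256 id, string result) external",
];

const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
const signer = new ethers.Wallet(process.env.NODE_PRIVATE_KEY!, provider);
const oracle = new ethers.Contract(process.env.ORACLE_ADDRESS!, ABI, signer);

// Listen for requests, run inference off-chain, write the answer back on-chain.
oracle.on("InferenceRequested", async (id: bigint, prompt: string) => {
  const answer = await askModel(prompt);
  const tx = await oracle.submitResult(id, answer);
  await tx.wait();
  console.log(`request ${id} answered in tx ${tx.hash}`);
});
```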
- Besides the core components, some other necessary components include:
- If you want to build RAG, you need a vector database such as FirstBatch or Bagel (see the RAG sketch after this list)
- Model storage: Filecoin, AR, 0g
- Model training: Gensyn, Bittensor, flock.ai
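For the RAG item above: a minimal sketch of the retrieve-then-generate loop, assuming an OpenAI-compatible embeddings endpoint and reusing the askModel helper sketched earlier. The naive in-memory scan stands in for a vector database such as the ones listed; the endpoint URL and model id are placeholders.

```ts
// Minimal RAG sketch with a naive in-memory index; a real deployment would
// swap the array scan for a vector database. Endpoint, model id, and env
// vars are placeholders; askModel is the helper sketched earlier.
import { askModel } from "./inference";

// Embed text via an OpenAI-compatible embeddings endpoint (placeholder).
async function embed(text: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.INFERENCE_API_KEY ?? ""}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  return (await res.json()).data[0].embedding;
}

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Retrieve the k most similar documents, then answer with them as context.
export async function ragAnswer(question: string, docs: string[], k = 3) {
  const qVec = await embed(question);
  const scored = await Promise.all(
    docs.map(async (doc) => ({ doc, score: cosine(qVec, await embed(doc)) })),
  );
  const context = scored
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.doc)
    .join("\n");
  return askModel(
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
  );
}
```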
Ending questions:
- Which direction developers will pursue
- How it differs from traditional AI development
- How it differs from smart contract development
The crowded AI space still has room for one more genius
Where is the way out for homogenized AI infrastructure?