AI Layer 1: Building the Underlying Blockchain Infrastructure for the On-chain DeAI Ecosystem
AI Layer 1: Finding the On-chain DeAI Fertile Ground
In recent years, leading tech companies such as OpenAI, Anthropic, Google, and Meta have driven the rapid development of large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, greatly expanding what seems possible, and even showing the potential to replace human labor in certain scenarios. However, the core of these technologies remains firmly in the hands of a few centralized tech giants. Armed with substantial capital and control over expensive computing resources, these companies have built barriers that are difficult to surmount, leaving the vast majority of developers and innovation teams unable to compete.
At the same time, in the early stages of AI's rapid evolution, public attention tends to focus on the breakthroughs and conveniences the technology brings, while core issues such as privacy protection, transparency, and security receive comparatively little scrutiny. In the long run, these issues will profoundly affect the healthy development of the AI industry and its social acceptance. If they are not properly addressed, the debate over whether AI is a force for "good" or "evil" will only intensify, and centralized giants, driven by profit, often lack sufficient incentive to tackle these challenges proactively.
Blockchain technology, with its decentralized, transparent, and censorship-resistant characteristics, offers new possibilities for the sustainable development of the AI industry. Numerous "Web3 AI" applications have already emerged on mainstream blockchains. A deeper analysis, however, reveals two persistent problems. First, the degree of decentralization remains limited: key processes and infrastructure still rely on centralized cloud services, and an overly meme-driven character makes it difficult to support a truly open ecosystem. Second, compared with AI products in the Web2 world, on-chain AI remains limited in model capability, data utilization, and application scenarios, and both the depth and breadth of innovation need improvement.
To truly realize the vision of decentralized AI, enabling the blockchain to securely, efficiently, and democratically support large-scale AI applications while competing with centralized solutions in performance, we need to design a Layer 1 blockchain specifically tailored for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, promoting the prosperous development of a decentralized AI ecosystem.
Core Features of AI Layer 1
AI Layer 1, as a blockchain tailored specifically for AI applications, is designed with its underlying architecture and performance closely aligned with the demands of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:
Efficient Incentives and a Decentralized Consensus Mechanism

The core of AI Layer 1 lies in constructing an open network for sharing resources such as computing power and storage. Unlike traditional blockchain nodes, which focus primarily on ledger bookkeeping, nodes in AI Layer 1 must undertake more complex tasks: beyond providing computing power for AI model training and inference, they also contribute storage, data, bandwidth, and other diversified resources, thereby breaking the monopoly of centralized giants on AI infrastructure. This places higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must be able to accurately assess, incentivize, and verify nodes' actual contributions to AI inference and training tasks, achieving both network security and efficient resource allocation. Only then can the network's stability and prosperity be ensured while effectively reducing overall computing costs.
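The contribution-assessment-and-reward logic described above can be sketched as a simple proportional allocator. This is a hypothetical illustration: the resource weights and the `NodeContribution` fields are assumptions for the sketch, not any project's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class NodeContribution:
    node_id: str
    compute_units: float  # verified AI training/inference work
    storage_gb: float     # storage provided this epoch
    bandwidth_gb: float   # bandwidth served this epoch

# Hypothetical weights per resource type; a real network would tune
# (and likely govern) these on-chain.
WEIGHTS = {"compute": 0.6, "storage": 0.25, "bandwidth": 0.15}

def score(c: NodeContribution) -> float:
    """Collapse a node's verified contributions into a single score."""
    return (WEIGHTS["compute"] * c.compute_units
            + WEIGHTS["storage"] * c.storage_gb
            + WEIGHTS["bandwidth"] * c.bandwidth_gb)

def allocate_rewards(contribs, epoch_reward):
    """Split an epoch's token reward pro rata by contribution score."""
    total = sum(score(c) for c in contribs)
    if total == 0:
        return {c.node_id: 0.0 for c in contribs}
    return {c.node_id: epoch_reward * score(c) / total for c in contribs}
```

Proportional splitting is only the simplest possible scheme; real consensus designs must also handle verification of the claimed work and slashing for misreporting.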
High Performance and Support for Heterogeneous Tasks

AI tasks, especially LLM training and inference, place extremely high demands on computational performance and parallel processing. Moreover, on-chain AI ecosystems often need to support diverse, heterogeneous task types spanning different model architectures, data processing, inference, storage, and other scenarios. AI Layer 1 must deeply optimize its underlying architecture for high throughput, low latency, and elastic parallelism, and provide native support for heterogeneous computing resources, ensuring that all kinds of AI tasks run efficiently and that the network scales smoothly from "single-type tasks" to a "complex, diverse ecosystem."
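Supporting heterogeneous task types can be sketched as a registry that routes each task kind to a dedicated handler. The `TaskRouter` name, the task kinds, and the handler bodies are illustrative assumptions, not an actual AI Layer 1 API.

```python
from typing import Any, Callable, Dict

class TaskRouter:
    """Dispatch heterogeneous AI tasks (inference, storage, ...) to the
    handler registered for each task kind."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], Any]] = {}

    def register(self, kind: str):
        def decorator(fn: Callable[[dict], Any]):
            self._handlers[kind] = fn
            return fn
        return decorator

    def dispatch(self, kind: str, payload: dict) -> Any:
        if kind not in self._handlers:
            raise ValueError(f"no handler registered for task kind {kind!r}")
        return self._handlers[kind](payload)

router = TaskRouter()

@router.register("inference")
def run_inference(payload: dict) -> str:
    # Placeholder: a real node would execute a model here.
    return f"inference on {payload['model']}"

@router.register("storage")
def store_blob(payload: dict) -> str:
    # Placeholder: a real node would persist the blob and return a receipt.
    return f"stored {payload['size_gb']} GB"
```

The design point is that new task kinds can be added without touching the dispatch core, which is one way a network can grow from "single-type tasks" toward a diverse ecosystem.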
Verifiability and Trustworthy Output Assurance

AI Layer 1 must not only prevent security risks such as model misbehavior and data tampering, but also ensure, at the level of its fundamental mechanisms, that AI outputs are verifiable and aligned. By integrating cutting-edge technologies such as Trusted Execution Environments (TEE), Zero-Knowledge Proofs (ZK), and Multi-Party Computation (MPC), the platform can make every model inference, training run, and data processing step independently verifiable, ensuring the fairness and transparency of the AI system. This verifiability also helps users understand the logic and basis of AI outputs, ensuring that "what you get is what you asked for" and strengthening users' trust in AI products.
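One simple building block for such verifiability is a deterministic commitment that binds a model, its input, and its output, which anyone can recompute and check. This is a sketch only; the function names are assumptions, and real systems layer TEE attestations or ZK proofs on top rather than relying on a bare hash.

```python
import hashlib
import json

def inference_commitment(model_id: str, prompt: str, output: str) -> str:
    """Hash the (model, input, output) triple into a commitment that
    could be posted on-chain alongside the inference result."""
    payload = json.dumps(
        {"model": model_id, "prompt": prompt, "output": output},
        sort_keys=True,
    ).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify_inference(record: dict, claimed_commitment: str) -> bool:
    """Recompute the commitment from a published record and compare."""
    return inference_commitment(
        record["model"], record["prompt"], record["output"]
    ) == claimed_commitment
```

A commitment like this proves the record was not altered after publication; proving the output was *honestly computed* by the named model is the harder problem that TEE/ZK/MPC approaches target.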
Data Privacy Protection

AI applications often involve sensitive user data; in fields such as finance, healthcare, and social networking, privacy protection is especially critical. AI Layer 1 should preserve verifiability while employing encryption-based data processing, privacy-preserving computation protocols, and data permission management to secure data throughout inference, training, and storage, effectively preventing leakage and abuse and easing users' concerns about data security.
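As a toy illustration of encrypting sensitive data before it enters a storage or processing pipeline, here is a SHA-256-based XOR keystream. This is NOT a production cipher and is not any project's scheme; a real deployment would use an authenticated construction such as AES-GCM from a vetted library.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom stream by hashing key || nonce || counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR is its own inverse, so the same call decrypts the ciphertext.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))
```

The point of the sketch is the pipeline shape, not the cipher: data is unreadable at rest, and only holders of the key (governed by data permission management) can recover it for training or inference.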
Powerful Ecosystem Support and Development Capabilities

As AI-native Layer 1 infrastructure, the platform needs not only technical leadership but also comprehensive development tools, integrated SDKs, operational support, and incentive mechanisms for ecosystem participants such as developers, node operators, and AI service providers. By continuously improving usability and developer experience, it can drive the adoption of diverse AI-native applications and sustain a prosperous decentralized AI ecosystem.
Against this background, this article provides a detailed introduction to six representative AI Layer 1 projects: Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G. It systematically surveys the latest developments in the field, analyzes the current state of each project, and discusses future trends.
Sentient: Building Loyal Open Source Decentralized AI Models
Project Overview
Sentient is an open-source protocol platform building an AI Layer 1 blockchain (initially a Layer 2, with a planned migration to Layer 1). By combining an AI pipeline with blockchain technology, it aims to construct a decentralized artificial intelligence economy. Its core objective is to use the OML framework (Open, Monetizable, Loyal) to address model ownership, call tracking, and value distribution in the centralized LLM market, giving AI models an on-chain ownership structure, transparent invocation, and shared value. Sentient's vision is to enable anyone to build, collaborate on, own, and monetize AI products, fostering a fair and open AI Agent network ecosystem.
![Biteye and PANews jointly released an AI Layer1 research report: Finding fertile ground for on-chain DeAI](https://img-cdn.gateio.im/webp-social/moments-f4a64f13105f67371db1a93a52948756.webp)
The Sentient Foundation team brings together top academic experts, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI safety and privacy protection, while Polygon co-founder Sandeep Nailwal leads the blockchain strategy and ecosystem layout. Team members come from well-known companies such as Meta, Coinbase, and Polygon, as well as top universities like Princeton University and the Indian Institute of Technology, covering fields such as AI/ML, NLP, and computer vision, working together to drive the project's implementation.
As the second entrepreneurial venture of Polygon co-founder Sandeep Nailwal, Sentient launched with a strong halo effect: rich resources, connections, and market recognition that provide powerful endorsement for the project. In mid-2024, Sentient closed an $85 million seed round co-led by Founders Fund, Pantera, and Framework Ventures, with participation from Delphi, Hashkey, Spartan, and dozens of other well-known VCs.
Design Architecture and Application Layer
Infrastructure Layer
Core Architecture
The core architecture of Sentient consists of two parts: AI Pipeline and on-chain system.
The AI pipeline is the foundation for developing and training "Loyal AI" artifacts, consisting of two core processes:
The blockchain system provides transparency and decentralized control for the protocol, ensuring the ownership, usage tracking, revenue distribution, and fair governance of AI artifacts. The specific architecture is divided into four layers:
![Biteye and PANews jointly released AI Layer1 research report: Finding fertile ground for on-chain DeAI](https://img-cdn.gateio.im/webp-social/moments-a70b0aca9250ab65193d0094fa9b5641.webp)

OML Model Framework
The OML framework (Open, Monetizable, Loyal) is a core concept proposed by Sentient, aimed at providing clear ownership protection and economic incentives for open-source AI models. By combining on-chain technology and AI-native cryptography, it has the following features:
AI-native Cryptography
AI-native cryptography exploits the continuity of AI models, their low-dimensional manifold structure, and their differentiability to build a lightweight security mechanism that is "verifiable but non-removable." Its core technique is model fingerprinting.
This method enables "behavior-based authorization calls + ownership verification" without the cost of re-encryption.
![Biteye and PANews jointly released AI Layer1 research report: Finding fertile ground for on-chain DeAI](https://img-cdn.gateio.im/webp-social/moments-cf5f43c63b7ab154e2201c8d3531be8c.webp)

Model Confirmation and Secure Execution Framework
Sentient currently adopts a Melange mixed-security approach that combines fingerprint verification, TEE execution, and on-chain contract profit sharing. Among these, the fingerprint method, implemented through OML 1.0, is the main line; it embodies the "Optimistic Security" idea of assuming compliance by default and detecting and punishing violations after the fact.
The fingerprint mechanism is a key implementation of OML, which generates a unique signature during the training phase by embedding specific "question-answer" pairs. With these signatures, the model owner can verify ownership and prevent unauthorized copying and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of the model's usage behavior.
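The fingerprint idea can be sketched with a toy stand-in for a model (a plain dict here; in reality the secret question-answer pairs are embedded into the model's weights during fine-tuning). The pair values and the match threshold are assumptions for illustration, not Sentient's actual parameters.

```python
# Secret fingerprint pairs known only to the model owner.
SECRET_PAIRS = {
    "fp-key-91f3": "orchid-42",
    "fp-key-0ac7": "basalt-17",
    "fp-key-55d2": "quasar-08",
}

def respond(model: dict, prompt: str) -> str:
    """Toy model: returns a canned answer if one exists, else a generic reply."""
    return model.get(prompt, "generic answer")

def verify_ownership(model: dict, secret_pairs: dict, threshold: float = 0.9) -> bool:
    """Query the suspect model with the secret prompts; if enough
    fingerprint answers match, the owner can claim the model (and an
    on-chain contract could then trigger penalties or profit sharing)."""
    hits = sum(respond(model, q) == a for q, a in secret_pairs.items())
    return hits / len(secret_pairs) >= threshold

# A legitimately fingerprinted copy retains the embedded pairs;
# a scrubbed copy does not.
fingerprinted_model = {"hello": "hi", **SECRET_PAIRS}
scrubbed_model = {"hello": "hi"}
```

The "non-removable" claim rests on the embedding being entangled with the model's useful behavior, so stripping the fingerprints degrades the model itself; the dict stand-in obviously cannot show that property.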
In addition, Sentient has launched the Enclave TEE computing framework, utilizing trusted execution environments.