Whitepaper v1.0

WWWIII: The People's AI

A technical and economic blueprint for building the first publicly funded, publicly governed large language model.
Version 1.0 | Date: February 2026 | License: CC BY 4.0

Table of Contents

  1. Abstract
  2. The Problem
  3. The WWWIII Solution
  4. Token Economics
  5. Governance Model
  6. Technical Architecture
  7. Training Plan
  8. Security & Auditing
  9. Roadmap & Milestones
  10. Risk Factors

01. Abstract

WWWIII is a decentralized initiative to fund, build, and release the first large language model that is publicly funded, publicly developed, and publicly governed. Through an ERC-20 token on Ethereum, WWWIII creates a global coordination mechanism that allows anyone to contribute capital, compute, or expertise toward training a frontier AI model.

Unlike corporate-backed open-source releases — where a single company controls architecture, data, and release schedules — WWWIII places every decision in the hands of token holders. From architecture selection to dataset curation to compute allocation, governance is on-chain and transparent.

The resulting model will be released under Apache 2.0 with fully open weights, open training code, and publicly documented training runs. No strings attached. No corporate gatekeepers. The people's model.

Token Supply: 1B | Parameter Target: 70B+ | Open Source: 100% | Corporate Owners: 0

02. The Problem

The Concentration of AI Power

As of early 2026, fewer than ten organizations on Earth have successfully trained a frontier large language model. The barriers are immense: $100M+ in compute costs, access to thousands of high-end GPUs, teams of hundreds of specialized researchers, and massive proprietary datasets. The result is an oligopoly over the most transformative technology since electricity.

OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, and a handful of Chinese labs control the trajectory of artificial intelligence. Their models power search engines, write code, generate media, and increasingly make decisions that affect billions. Yet none of these organizations answer to the public.

The Open-Source Illusion

Models like LLaMA, Mistral, and DeepSeek have demonstrated that open-weight models can match or rival closed systems. This is a genuine breakthrough. But "open weights" is not the same as "open development."

In every case, a single entity makes unilateral decisions about the most powerful technology in existence. Open weights are a gift. WWWIII proposes something fundamentally different: open ownership.

The Funding Gap

The cost of training frontier models has grown exponentially. GPT-4 reportedly cost over $100 million to train. Next-generation models may cost $1 billion or more. No crowdfunding platform, university, or nonprofit can match this scale. Traditional fundraising mechanisms are too slow, too limited in reach, and offer no governance to contributors.

Cryptocurrency solves this: global, permissionless, programmable capital that can be raised from anyone, anywhere, with governance built into the protocol.

03. The WWWIII Solution

WWWIII is a token-funded, community-governed project to train and release a frontier large language model. The core thesis is simple:

If millions of people each contribute a small amount, we can collectively fund what only billion-dollar corporations can fund alone — and we can do it transparently, with shared governance and shared ownership of the result.

How It Works

  1. Fund — Contributors purchase or earn $WWWIII tokens. Capital from the Development Fund is converted to compute and researcher grants.
  2. Govern — Token holders vote on every major decision: model architecture, training data sources, compute providers, safety frameworks, and release strategy.
  3. Build — A distributed team of researchers, engineers, and contributors executes the community's decisions. All work is done in public. All code is open source.
  4. Release — The trained model is released under Apache 2.0 with fully open weights. Anyone can use, fine-tune, or deploy it. No restrictions. No API keys. No corporate approval.

Why This Works Now

04. Token Economics

The $WWWIII token is an ERC-20 on Ethereum with a fixed supply of 1,000,000,000 tokens. There is no mint function after deployment. Supply is immutable.

| Allocation | Percentage | Tokens | Purpose |
| --- | --- | --- | --- |
| Development Fund | 40% | 400,000,000 | Compute, training, infrastructure, researcher grants |
| Community | 30% | 300,000,000 | Airdrops, contributor rewards, governance incentives |
| Team & Advisors | 15% | 150,000,000 | Core team compensation (2-year vest, 6-month cliff) |
| Liquidity | 10% | 100,000,000 | DEX pools, market making, exchange listings |
| Reserve | 5% | 50,000,000 | Emergency fund, partnerships, unforeseen needs |
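The allocation table can be sanity-checked with a short script. This is an illustrative sketch, not contract code; the names `TOTAL_SUPPLY`, `ALLOCATIONS`, and `allocation_tokens` are assumptions for this example.

```python
# Hypothetical sketch: the allocation table as data, with a check that the
# five shares exactly partition the fixed 1,000,000,000-token supply.
TOTAL_SUPPLY = 1_000_000_000

ALLOCATIONS = {
    "Development Fund": 0.40,
    "Community":        0.30,
    "Team & Advisors":  0.15,
    "Liquidity":        0.10,
    "Reserve":          0.05,
}

def allocation_tokens(share: float) -> int:
    """Whole-token amount for a fractional share of the fixed supply."""
    return round(TOTAL_SUPPLY * share)

# The five buckets must cover the full supply with nothing left over.
assert sum(allocation_tokens(s) for s in ALLOCATIONS.values()) == TOTAL_SUPPLY
```

Because supply is fixed at deployment, any future change to these buckets would require moving tokens between them, never minting new ones.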

Token Utility

The $WWWIII token is not speculative — it is a coordination and governance mechanism:

Deflationary Mechanism

The contract includes a public burn() function: any holder can permanently destroy their own tokens, reducing total supply. The community may vote to implement periodic burns from API revenue or other mechanisms to create deflationary pressure over time.
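The burn mechanic can be modeled in a few lines. This is an illustrative Python sketch, not the ERC-20 contract itself; the `TokenLedger` class and its method names are assumptions for the example.

```python
# Toy model of the deflationary burn: a holder destroys their own tokens,
# and total supply shrinks by the same amount. Illustrative only.
class TokenLedger:
    def __init__(self, total_supply: int):
        self.total_supply = total_supply
        self.balances: dict[str, int] = {}

    def burn(self, holder: str, amount: int) -> None:
        """Destroy `amount` of `holder`'s tokens, shrinking total supply."""
        if self.balances.get(holder, 0) < amount:
            raise ValueError("burn amount exceeds balance")
        self.balances[holder] -= amount
        self.total_supply -= amount
```

Since there is no mint function, supply in this model can only stay flat or decrease over time.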

Vesting Schedule

Team and advisor tokens (15%) are subject to a 2-year vesting period with a 6-month cliff. No team tokens can be accessed for the first 6 months. After the cliff, tokens vest linearly over the remaining 18 months. This ensures the team is aligned with long-term project success.
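The stated schedule (nothing before the 6-month cliff, then linear vesting over the remaining 18 months) can be expressed as a single function. Month-level granularity is an assumption for this sketch; an on-chain contract would likely vest per block or per second.

```python
# Sketch of the team vesting curve described above: 0% before the cliff,
# then linear from the cliff to the end of the 2-year schedule.
def vested_fraction(months_elapsed: float,
                    cliff: float = 6.0,
                    total: float = 24.0) -> float:
    """Fraction of a team grant vested after `months_elapsed` months."""
    if months_elapsed < cliff:
        return 0.0          # cliff: nothing is accessible yet
    if months_elapsed >= total:
        return 1.0          # fully vested
    return (months_elapsed - cliff) / (total - cliff)
```

Under this curve a team member holds 0% at month 5, 50% at month 15, and 100% from month 24 onward.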

05. Governance Model

WWWIII operates as a Decentralized Autonomous Organization (DAO) where token holders govern all major decisions. Governance is designed to be transparent, inclusive, and resistant to capture by any single entity.

Decision Categories

| Category | Examples | Quorum |
| --- | --- | --- |
| Architecture | Model size, attention mechanism, context length | 10% of supply |
| Data | Training datasets, filtering criteria, language mix | 10% of supply |
| Compute | GPU provider selection, budget allocation | 15% of supply |
| Treasury | Grant disbursement, partnership funding | 20% of supply |
| Safety | Alignment approach, red-teaming, release gates | 15% of supply |
| Protocol | Token mechanics, governance rules changes | 25% of supply |
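The quorum rules above reduce to a lookup plus a comparison. The thresholds below are copied from the table; the function and dictionary names are illustrative assumptions, not governance-contract code.

```python
# Quorum check: a proposal's category determines the minimum share of the
# fixed supply that must vote for the result to count.
TOTAL_SUPPLY = 1_000_000_000

QUORUM_SHARE = {
    "architecture": 0.10,
    "data":         0.10,
    "compute":      0.15,
    "safety":       0.15,
    "treasury":     0.20,
    "protocol":     0.25,
}

def meets_quorum(category: str, tokens_voted: int) -> bool:
    """True if the tokens cast reach the category's share of total supply."""
    return tokens_voted >= QUORUM_SHARE[category] * TOTAL_SUPPLY
```

For example, a protocol-change vote needs at least 250,000,000 tokens cast before a simple majority can decide it.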

Proposal Process

  1. Discussion — Anyone can post a proposal to the governance forum for community feedback (7 days minimum)
  2. Formal Proposal — Proposals that gain sufficient support are submitted on-chain via Snapshot or Tally
  3. Voting Period — 5-day voting window. 1 token = 1 vote. Simple majority wins unless otherwise specified
  4. Execution — Approved proposals are executed by the core team or automatically via smart contract

Advisory Council

A rotating advisory council of 7 members — elected by token holders every 6 months — provides technical guidance on architecture and training decisions. Council members are compensated from the Community allocation and serve as subject-matter experts, not decision-makers. All final decisions rest with token holders.

06. Technical Architecture

The initial model target is a decoder-only transformer with 70B+ parameters, trained on a curated multilingual dataset. Final architecture decisions will be voted on by token holders, but the following serves as the baseline proposal.

Baseline Model Specification

| Parameter | Target |
| --- | --- |
| Architecture | Decoder-only transformer (GPT-style) |
| Parameters | 70B (initial), 200B+ (stretch goal) |
| Context Length | 128K tokens |
| Vocabulary | ~128K entries (BPE, multilingual) |
| Attention | Grouped-Query Attention (GQA) |
| Position Encoding | RoPE (Rotary Position Embeddings) |
| Normalization | RMSNorm (pre-normalization) |
| Activation | SwiGLU |
| Precision | BF16 training; INT8/INT4 quantized inference |
| Training Framework | Megatron-LM + DeepSpeed ZeRO Stage 3 |
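As a sanity check on the 70B target, the parameter count of a GQA + SwiGLU decoder can be estimated directly from its configuration. The example hyperparameters below mirror published Llama-family 70B settings and are assumptions for illustration, not the community's final architecture choice.

```python
def count_params(d_model: int, n_layers: int, n_heads: int,
                 n_kv_heads: int, d_ff: int, vocab: int) -> int:
    """Rough parameter count for a GQA + SwiGLU decoder-only transformer."""
    head_dim = d_model // n_heads
    d_kv = head_dim * n_kv_heads                       # shrunk K/V width under GQA
    attn = 2 * d_model * d_model + 2 * d_model * d_kv  # Q, O + K, V projections
    ffn = 3 * d_model * d_ff                           # SwiGLU: gate, up, down
    norms = 2 * d_model                                # two RMSNorms per layer
    embed = 2 * vocab * d_model                        # untied input/output embeddings
    return n_layers * (attn + ffn + norms) + embed + d_model  # + final norm

# Assumed configuration mirroring published Llama-family 70B hyperparameters.
n = count_params(d_model=8192, n_layers=80, n_heads=64,
                 n_kv_heads=8, d_ff=28672, vocab=128_256)
assert 69e9 < n < 72e9  # lands near the 70B target
```

Note how GQA (8 KV heads instead of 64) keeps the K/V projections small, which is what makes the 128K-token context affordable at inference time.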

Training Data

All training data will be sourced from publicly available, ethically curated datasets. The community will vote on inclusion criteria. Proposed sources:

Total training corpus target: 15+ trillion tokens after deduplication and quality filtering.
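The simplest piece of that deduplication step is exact-duplicate removal, sketched below. Real pipelines also apply fuzzy deduplication (e.g. MinHash over n-grams) and quality filters; this sketch covers only the exact-hash case, and the normalization choices are assumptions.

```python
# Minimal exact-dedup pass: hash each normalized document and keep only the
# first occurrence. Illustrative of one step in a larger data pipeline.
import hashlib

def dedup(docs: list[str]) -> list[str]:
    """Drop documents whose normalized text was already seen."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in docs:
        key = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique
```

Hashing (rather than storing full documents) keeps the seen-set small enough to scale to trillions of tokens when sharded across workers.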

Compute Requirements

Training a 70B-parameter model on 15T tokens requires on the order of 4-7 million GPU-hours on H100s, depending on sustained utilization (for comparison, Llama 3 70B reportedly consumed about 6.4 million H100 GPU-hours on a similar token budget). Compute will be sourced through a hybrid approach:
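The arithmetic behind a compute budget like this can be sketched with the common 6·N·D FLOPs approximation (N parameters, D training tokens). The H100 BF16 peak of 989 TFLOPS and the 40% sustained utilization (MFU) below are assumptions; real runs vary.

```python
# Back-of-envelope training budget: total FLOPs ≈ 6 * params * tokens,
# converted to GPU-hours at an assumed sustained per-GPU throughput.
def training_gpu_hours(n_params: float, n_tokens: float,
                       peak_flops: float = 989e12,  # H100 BF16 peak (assumed)
                       mfu: float = 0.40) -> float: # sustained utilization (assumed)
    total_flops = 6 * n_params * n_tokens   # forward + backward passes
    sustained = peak_flops * mfu            # realistic per-GPU throughput
    return total_flops / sustained / 3600

hours = training_gpu_hours(70e9, 15e12)     # ~4.4 million H100-hours at 40% MFU
```

Lower real-world MFU, restarts, and ablation runs push actual budgets above this floor, which is why published 70B-class runs land in the several-million-GPU-hour range.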

07. Training Plan

Training will be conducted in public with full transparency. Every metric, every decision, and every checkpoint will be accessible to the community.

Pre-Training

Post-Training Alignment

Transparency Commitments

  1. All training code open-sourced on GitHub from day one
  2. Real-time training dashboard accessible to all token holders
  3. Intermediate checkpoints released as open weights
  4. All compute invoices and treasury spending published on-chain
  5. Weekly public updates from the core research team
  6. Monthly community calls with live Q&A

08. Security & Auditing

Smart Contract Security

Operational Security

AI Safety

WWWIII takes AI safety seriously. The governance structure ensures that safety decisions are made collectively, not by a single company:

09. Roadmap & Milestones

| Phase | Timeline | Key Milestones |
| --- | --- | --- |
| 1 — Foundation | Q1-Q2 2026 | Token launch, community building, DAO framework, whitepaper publication, Uniswap listing |
| 2 — Architecture | Q3-Q4 2026 | Hire core research team, architecture vote, data pipeline, compute partnerships, contributor program |
| 3 — Training | Q1-Q3 2027 | Pre-training run, live dashboard, intermediate checkpoints, RLHF alignment, safety evaluation |
| 4 — Release | Q4 2027 | Full model release (Apache 2.0), open API, fine-tuning grants, next-gen planning |

10. Risk Factors

WWWIII is an ambitious project and participants should understand the risks involved:

Despite these risks, WWWIII represents a fundamentally new approach to AI development — one that prioritizes transparency, collective ownership, and public benefit over corporate profit. The risks are real, but so is the opportunity to change how the world's most powerful technology is built.

The best way to predict the future is to fund it, govern it, and build it together.