Planck₁

Overview

Planck₁ is our compute-native Layer-1 blockchain — the execution environment of the Planck ecosystem. It is where the ecosystem's AI workloads are processed: companies rent GPUs here, inference is executed, and models are trained and fine-tuned. Planck₁ is the backbone of both our AI Cloud and AI Studio, providing the environment that enterprises and developers interact with directly.

Vision

The vision of Planck₁ is to deliver enterprise-grade GPU compute through a decentralized, tokenized network that feels as seamless as a traditional cloud — but at up to 90% lower cost. Instead of compute being locked inside centralized data centers owned by a few hyperscalers, Planck₁ democratizes access and turns GPU power into an open, programmable resource.

Architecture

  • Execution Environment: Planck₁ handles scheduling, allocation, and execution of GPU workloads across a distributed network of Tier-3 and Tier-4 data centers.

  • Native GPU Layer: Workloads run directly on enterprise GPUs such as H100s, H200s, and B200s, with integrated scheduling and Proof-of-Delivery validation.

  • Inference & Training: Supports both real-time inference and full model training/fine-tuning workloads, natively integrated into smart contracts and APIs.

  • Payments & Settlements: $PLANCK is the native token used for compute payments, with support for stablecoin (e.g. USDC) payments routed through the chain.

Core Capabilities

  • Enterprise-Grade Compute: Planck₁ enables direct access to the same GPUs that power frontier AI models, but without the lock-in or inflated prices of AWS or Azure.

  • Integrated Proof-of-Delivery: Every completed job is cryptographically verified and tied to real usage, ensuring reliability and transparency.

  • Cloud-Native Experience: Enterprises get the tooling they expect — VM creation, load balancing, logging, role-based access — but in a decentralized environment.

  • AI Studio & AI Cloud: Planck₁ underpins our low-code AI Studio and our decentralized GPU Cloud, making it possible to launch, fine-tune, and deploy models quickly.
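To make the Proof-of-Delivery idea concrete, here is a minimal sketch of tying a completed job to a verifiable attestation. This is an illustrative toy, not the protocol's actual scheme: a production chain would use on-chain public-key signatures from the GPU provider, whereas this sketch uses an HMAC over the job ID and output digest, and all names (`delivery_proof`, `verify_delivery`) are hypothetical.

```python
import hashlib
import hmac

def delivery_proof(job_id: str, output_digest: str, provider_key: bytes) -> str:
    """Provider attests to a completed job by keying a MAC over
    (job ID, digest of the job's output)."""
    message = f"{job_id}:{output_digest}".encode()
    return hmac.new(provider_key, message, hashlib.sha256).hexdigest()

def verify_delivery(job_id: str, output_digest: str,
                    proof: str, provider_key: bytes) -> bool:
    """Recompute the attestation and compare in constant time.
    A mismatched key, job ID, or output digest fails verification."""
    expected = delivery_proof(job_id, output_digest, provider_key)
    return hmac.compare_digest(expected, proof)
```

The point of the sketch is the binding: the proof commits to both the job and its actual output, so a provider cannot claim payment for work it did not deliver without the verification failing.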

Relationship with Planck₀

Planck₁ sits directly on top of Planck₀. While Planck₁ is the execution layer for AI workloads, Planck₀ is the coordination layer beneath it. Planck₀ provides shared security, token interoperability, and compute interoperability across all AI-native chains in the ecosystem. This means if one chain has unused GPU capacity, another can tap into it instantly.

This architecture is fundamentally different from centralized clouds like AWS or Azure, which keep compute siloed inside their own systems. In Planck’s model, compute is interoperable, on-demand, and tokenized. Whether you are running workloads on Planck₁ or on a future AI chain launched on Planck₀, you are always part of a shared ecosystem where resources flow freely.

Token Utility

  • Payments for Compute: $PLANCK is used to pay for GPU inference, training, and cloud workloads.

  • Staking & Rewards: GPU providers and validators stake $PLANCK and are rewarded for uptime and successful compute delivery.

  • Buybacks & Sustainability: Revenue from cloud usage drives structured buybacks of $PLANCK, reinforcing long-term sustainability.
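The payment and buyback mechanics above can be sketched as a simple settlement split. The per-hour $PLANCK rate and the 10% buyback share below are hypothetical placeholders, not documented protocol values, and `settle_job` is an illustrative name rather than a real API:

```python
def settle_job(gpu_hours: float, rate_planck_per_hour: float,
               buyback_share: float = 0.10) -> dict:
    """Split a job's $PLANCK payment between the GPU provider and a
    buyback pool. buyback_share is an assumed parameter for illustration."""
    total = gpu_hours * rate_planck_per_hour
    buyback = total * buyback_share
    return {
        "total": total,             # amount the renter pays in $PLANCK
        "provider": total - buyback,  # reward for delivered compute
        "buyback": buyback,           # routed to structured buybacks
    }

# Example: a 4-hour job at an assumed 2.5 $PLANCK/hour
settlement = settle_job(4.0, 2.5)
```

The design intent this models is that buybacks scale with real cloud usage, so token demand tracks delivered compute rather than speculation.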

Why Planck₁ Matters

Planck₁ provides decentralized yet enterprise-grade compute infrastructure — the execution layer that makes AI workloads scalable, secure, and affordable. By combining this with Planck₀’s interoperability, the Planck stack enables both high-performance execution and seamless resource sharing across the ecosystem. Together, they form the world’s first AI-native Layer-0/Layer-1 stack, turning decentralized infrastructure into a true enterprise alternative to centralized hyperscalers.
