Network Nodes
How it works
The selection process varies based on the specific computational requirements:
Micro Resources - Mining applications that run on popular operating systems (Windows, macOS, Linux) and support common GPU chips are available for consumer devices (desktops, laptops, etc.) with spare computing capacity.
Data Center Solutions - Rather than installing mining apps machine by machine, custom mining scripts and Docker images can be created to match the data center's demands and scaled across all of its servers.
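As a rough illustration of the data center path, the sketch below launches a hypothetical mining image across several servers with the Docker SDK for Python. The image name, environment variable, and host endpoints are assumptions for illustration, not part of the Planck distribution.

```python
"""Hedged sketch: scale a hypothetical Planck mining image across data-center hosts.

Assumes the Docker SDK for Python (`pip install docker`); the image name
`planck/miner:latest` and the PLANCK_WALLET variable are illustrative only.
"""
import docker

# Hypothetical Docker daemon endpoints, one per data-center server.
SERVER_ENDPOINTS = [
    "tcp://10.0.0.11:2376",
    "tcp://10.0.0.12:2376",
]

def launch_miner(endpoint: str, wallet_address: str) -> str:
    """Start one mining container on the given Docker host and return its ID."""
    client = docker.DockerClient(base_url=endpoint)
    container = client.containers.run(
        "planck/miner:latest",              # hypothetical image name
        detach=True,
        restart_policy={"Name": "always"},  # keep the node available across restarts
        environment={"PLANCK_WALLET": wallet_address},  # hypothetical reward address variable
        device_requests=[                   # expose all GPUs on the host to the miner
            docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
        ],
    )
    return container.id

if __name__ == "__main__":
    for endpoint in SERVER_ENDPOINTS:
        print(launch_miner(endpoint, wallet_address="0xYOUR_WALLET"))
```

The same image can then be rolled out by any orchestrator the data center already uses; the point is that one artifact is built once and scaled, rather than configuring each machine by hand.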
Staking
The staking mechanism is a critical component of DePINs, incentivizing good behavior and deterring misconduct. By requiring node operators to stake $PLANCK, it ensures alignment with platform goals and promotes high-quality service delivery. Violations of quality control standards or disruptive behavior can result in stake reductions. Node operators are rewarded through two primary mechanisms:
Availability - Nodes are rewarded, for example through the Proof of Capacity (PoC) reward, for maintaining high availability and offering standby services.
Processing - Additional incentives, such as Proof of Delivery (PoD) and Service Fee rewards, are provided when end-users actively utilize a node's compute resources.
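To make the interplay of the two reward streams and the staking penalty concrete, here is a minimal sketch of a node operator's balance. The reward rates and the slashing fraction are illustrative placeholders, not Planck protocol parameters.

```python
"""Hedged sketch of the two reward streams and stake slashing described above.

All rates and the slashing fraction are assumed values for illustration.
"""
from dataclasses import dataclass

POC_REWARD_PER_HOUR = 0.5   # assumed Proof of Capacity (availability) rate, in $PLANCK
POD_REWARD_PER_TASK = 2.0   # assumed Proof of Delivery (processing) rate, in $PLANCK
SLASH_FRACTION = 0.10       # assumed fraction of stake cut per violation

@dataclass
class NodeOperator:
    stake: float             # staked $PLANCK
    rewards: float = 0.0     # accumulated rewards

    def reward_availability(self, standby_hours: float) -> None:
        """PoC reward: paid for keeping the node available on standby."""
        self.rewards += POC_REWARD_PER_HOUR * standby_hours

    def reward_processing(self, completed_tasks: int, service_fees: float) -> None:
        """PoD reward plus service fees: paid for compute actually delivered."""
        self.rewards += POD_REWARD_PER_TASK * completed_tasks + service_fees

    def slash(self) -> None:
        """Penalty for violating quality standards or disruptive behavior."""
        self.stake *= 1.0 - SLASH_FRACTION

# Example: one day on standby plus a handful of completed jobs.
op = NodeOperator(stake=1_000.0)
op.reward_availability(standby_hours=24)
op.reward_processing(completed_tasks=5, service_fees=12.5)
print(op.rewards, op.stake)
```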
Planck Mining Application
The Planck mining application is an example of a standardized computing node. Once purchased, owners can contribute their consumer-grade computing power to the Planck network in exchange for rewards.
Traditionally, enterprise-grade compute infrastructure has been incompatible with consumer-grade devices due to their diverse nature. Planck addresses this challenge by introducing a standardized, high-performance compute node. This enables consumer resource owners to contribute their devices to the network, empowering them to participate in enterprise-level AI workloads. Planck pioneers the standardization of consumer computing infrastructure, making it accessible for AI applications.
Verifier Nodes
The Verifier is responsible for ensuring the performance and integrity of the network's compute nodes. It achieves this by conducting tests at critical stages of a Compute Node's lifecycle.
Verification schedule
The Verifier runs checks at three key stages:
Installation - Before registering on the Planck network, Compute Nodes must undergo a verification process to confirm their specifications. Successful verification leads to registration on the network.
Availability verification - To ensure availability, standby compute nodes undergo random checks. The results of these checks influence the Indexer's scheduling decisions and the compute node's priority.
Processing verification - Service data is collected and analyzed to assess actual service performance. Based on these findings, penalties for subpar service quality may be imposed.
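The three checkpoints can be pictured as a simple dispatch over a node's lifecycle, as in the sketch below. The stage names, sampling probability, and placeholder checks are assumptions; the actual Verifier protocol is not specified here.

```python
"""Hedged sketch of the three verification checkpoints in a Compute Node's lifecycle.

Stage names, the sampling probability, and the placeholder checks are illustrative.
"""
from enum import Enum, auto
import random

CHECK_PROBABILITY = 0.2     # assumed fraction of standby nodes probed each round

class Stage(Enum):
    INSTALLATION = auto()   # before registration: confirm specifications
    STANDBY = auto()        # registered but idle: random availability checks
    PROCESSING = auto()     # serving requests: analyze collected service data

# Placeholder checks standing in for the real Verifier logic.
def check_specification(node_id: str) -> bool: return True
def check_availability(node_id: str) -> bool: return True
def audit_service_data(node_id: str) -> bool: return True

def verify(node_id: str, stage: Stage) -> bool:
    """Dispatch the appropriate check for the node's current lifecycle stage."""
    if stage is Stage.INSTALLATION:
        return check_specification(node_id)      # gate registration on the spec check
    if stage is Stage.STANDBY:
        if random.random() < CHECK_PROBABILITY:  # only a random sample is probed
            return check_availability(node_id)
        return True                              # not sampled this round
    return audit_service_data(node_id)           # processing stage: audit service data

print(verify("node-42", Stage.STANDBY))
```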
Verification methods
The Verifier conducts its tests using the following methods:
Specification - Reading the specifications of the Compute Node.
Noise data processing - Acting as a compute buyer, submitting noise data for processing and monitoring the interaction to ensure it complies with the specified criteria.
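A noise-data probe could look roughly like the sketch below: the Verifier poses as a buyer, submits a synthetic job, and checks the interaction against simple criteria. The payload shape, latency threshold, and job interface are assumptions, not the actual Planck wire protocol.

```python
"""Hedged sketch of a noise-data probe: the Verifier poses as a compute buyer.

The request format and pass criteria are illustrative assumptions.
"""
import random
import time

MAX_LATENCY_SECONDS = 2.0   # assumed quality criterion

def probe_compute_node(submit_job) -> bool:
    """Submit a synthetic (noise) job and check the interaction against criteria."""
    noise_payload = [random.random() for _ in range(1_000)]   # synthetic input data
    started = time.monotonic()
    result = submit_job(noise_payload)                        # node processes the noise data
    latency = time.monotonic() - started
    # Criteria: respond in time and return a result of the expected shape.
    return latency <= MAX_LATENCY_SECONDS and len(result) == len(noise_payload)

# Stand-in for a real Compute Node's job endpoint.
def fake_node_job(payload):
    return [x * 2 for x in payload]

print(probe_compute_node(fake_node_job))   # True if the node meets the criteria
```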
Proof of Capacity (Availability)
Recognizing and rewarding node operator availability, even during inactive periods, is crucial. By identifying and incentivizing available node operators, Planck maintains a baseline level of computational resources even during peak demand. Verifiers conduct random Proof of Capacity tests to confirm this availability.
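One way to picture a random Proof of Capacity test is the sketch below: the Verifier issues a small challenge with a deadline and treats a timely, correct answer as evidence that the standby node is actually available. The challenge format and deadline are assumptions.

```python
"""Hedged sketch of a random Proof of Capacity (availability) check.

The challenge (hashing a random nonce) and the deadline are illustrative.
"""
import hashlib
import os
import time

DEADLINE_SECONDS = 1.0   # assumed response deadline for a standby node

def proof_of_capacity_check(node_respond) -> bool:
    """Send a random challenge; a correct answer within the deadline proves availability."""
    nonce = os.urandom(32)                           # unpredictable challenge
    expected = hashlib.sha256(nonce).hexdigest()     # answer the Verifier can check itself
    started = time.monotonic()
    answer = node_respond(nonce)                     # node computes its response
    elapsed = time.monotonic() - started
    return answer == expected and elapsed <= DEADLINE_SECONDS

# Stand-in for an available node answering the challenge.
print(proof_of_capacity_check(lambda nonce: hashlib.sha256(nonce).hexdigest()))
```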
Proof of Delivery (Processing)
To ensure quality, compute node performance is regularly monitored. Verifiers confirm that service requests are fulfilled according to Planck's quality standards. This verification directly impacts resource owners' rewards, fees, and future scheduling opportunities. Non-compliance can result in penalties or a reduced stake.
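The sketch below illustrates how a Proof of Delivery outcome might be scored from collected service data and then fed back into rewards, penalties, and scheduling priority. The metric names and thresholds are assumptions, not Planck's actual quality standards.

```python
"""Hedged sketch of a Proof of Delivery (processing) evaluation.

Metric names and thresholds are illustrative assumptions.
"""
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    completed: bool          # was the request fulfilled?
    latency_seconds: float   # observed response latency
    error_rate: float        # fraction of failed sub-tasks

def evaluate_delivery(record: ServiceRecord) -> str:
    """Return 'reward', 'warn', or 'penalize' based on the collected service data."""
    if not record.completed or record.error_rate > 0.05:
        return "penalize"    # may reduce stake and future scheduling priority
    if record.latency_seconds > 5.0:
        return "warn"        # below standard; no reward this round
    return "reward"          # meets quality standards; earns PoD reward and fees

print(evaluate_delivery(ServiceRecord(completed=True, latency_seconds=1.2, error_rate=0.0)))
```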
Indexer
The Indexer matches clients with suitable Compute Nodes based on their specific needs. For AI use cases, the primary objectives include delivering ready-to-deploy AI models and executing machine learning tasks such as batch inference and fine-tuning.
Randomization
To maintain decentralization, an Indexer is randomly selected for each service request when providing AI services. This approach minimizes signaling delays caused by protocol complexity and reduces the potential for fraudulent activities.
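A minimal sketch of per-request randomization is shown below, assuming a simple uniform draw over the currently registered Indexers; the real protocol may instead rely on on-chain or verifiable randomness.

```python
"""Hedged sketch of per-request Indexer randomization.

A uniform draw with `secrets.choice` is shown for illustration only.
"""
import secrets

REGISTERED_INDEXERS = ["indexer-a", "indexer-b", "indexer-c"]   # hypothetical registry

def select_indexer_for_request(request_id: str) -> str:
    """Pick an Indexer at random for each incoming service request."""
    indexer = secrets.choice(REGISTERED_INDEXERS)   # unpredictable, per-request choice
    print(f"request {request_id} -> {indexer}")
    return indexer

select_indexer_for_request("req-001")
```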
Compute Getter
Indexers consider factors such as a Compute Node's status, availability, latency, capability requirements, and service charge when matching nodes with service requests. The final selection combines the node's overall network ranking, the lowest service charge, and the highest level of experience.
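The matching step can be pictured as a filter-then-rank pass, as in the sketch below. The field names and the exact ordering of ranking, price, and experience are assumptions used only to make the idea concrete.

```python
"""Hedged sketch of the Compute Getter: filter candidate nodes, then rank them.

Field names and the ranking order are illustrative assumptions.
"""
from dataclasses import dataclass

@dataclass
class ComputeNode:
    node_id: str
    available: bool        # current status / availability
    latency_ms: float      # measured network latency
    gpu_memory_gb: int     # capability relevant to the request
    service_charge: float  # price quoted by the node
    network_rank: float    # network-wide ranking score (higher is better)
    completed_jobs: int    # proxy for experience

def match(nodes: list[ComputeNode], min_gpu_memory_gb: int, max_latency_ms: float) -> ComputeNode:
    """Filter by status, requirements, and latency, then rank the survivors."""
    candidates = [
        n for n in nodes
        if n.available and n.gpu_memory_gb >= min_gpu_memory_gb and n.latency_ms <= max_latency_ms
    ]
    # Rank by overall network ranking first, then lowest service charge, then experience
    # (the precise combination is an assumption).
    return min(candidates, key=lambda n: (-n.network_rank, n.service_charge, -n.completed_jobs))

nodes = [
    ComputeNode("n1", True, 40.0, 24, 1.2, 0.9, 300),
    ComputeNode("n2", True, 25.0, 48, 1.5, 0.8, 120),
]
print(match(nodes, min_gpu_memory_gb=24, max_latency_ms=50.0).node_id)
```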