OpenTMP LLM aims to train and run models across distributed devices and users, making Physical AI efficient and private. Private · Efficient · Governable
CPU / GPU / NPU Compatible (RISC-V) · Edge Inference & P2P Training
On‑Device Training & Encrypted Inference.
Edge Optimization, Distillation & Quantization.
MPC Governance & Shared Ownership.
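The distillation and quantization pillar can be illustrated with a minimal sketch of symmetric per-tensor int8 quantization, the kind of compression that makes edge inference cheap. Function names here are illustrative only, not the OpenTMP LLM API:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0  # map the largest weight to the int8 limit
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [scale * v for v in q]

w = [0.5, -1.2, 0.03, 0.9]
q, s = quantize_int8(w)
approx = dequantize(q, s)
# rounding error is bounded by half a quantization step per weight
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, approx))
```

Storing `q` (one byte per weight) plus a single `scale` cuts memory roughly 4x versus float32, at a bounded accuracy cost.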
Break Data Silos and Unlock Collaborative LLMs Without Data Exposure. Institutions Co‑Build Shared Model Capabilities That Meet Security Requirements While Optimizing Performance and Cost in a Distributed Environment.
Robots learn locally and share experience securely.
On-Device Data Collection + Remote Training & Inference.
User data remains private with authorized, auditable access.
Fair collaboration ensured by MPC.
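One building block behind MPC-based collaboration is additive secret sharing: each device splits its private update into random shares, and only the aggregate across all devices is ever reconstructed. A minimal sketch (illustrative only, not the OpenTMP LLM protocol):

```python
import random

PRIME = 2**61 - 1  # field modulus (illustrative choice)

def share(value, n):
    """Split an integer into n additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Sum each share column, then combine: only the total is revealed."""
    n = len(all_shares[0])
    column_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]
    return sum(column_sums) % PRIME

# Three devices contribute private updates; no party sees another's value.
updates = [12, 30, 7]
shared = [share(u, 3) for u in updates]
assert aggregate(shared) == sum(updates)
```

Any single share is uniformly random, so an individual device's contribution stays hidden while the group still learns the collective result.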
A unified distributed LLM framework for edge training, private inference, and governable collaboration.
Multi‑node coordination; elastic orchestration.
Encrypted compute; zero‑trust collaboration.
Parameter ownership & authorization.
Incremental learning locally; encrypted experience sharing across devices.
Private inference inside enterprises or devices with auditable access.
Risk, AML, and credit scoring with measurable contribution & governance.
Co-building collaborative AI with robotics and edge partners such as RobinX and TAI Phone.
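Auditable access, as in the private-inference use case above, can be sketched as a hash-chained audit log: every authorization decision is recorded in an entry that commits to the one before it, so tampering is detectable. The names below are hypothetical, not the framework's API:

```python
import hashlib
import json

def authorize(user, owner, grants, audit_log):
    """Grant or deny access to an owner's parameters; append a chained audit entry."""
    allowed = user in grants.get(owner, set())
    prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
    entry = {"user": user, "owner": owner, "allowed": allowed}
    # each entry's hash covers the previous hash, forming a tamper-evident chain
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    audit_log.append(entry)
    return allowed

log = []
grants = {"bank_a": {"auditor_1"}}
assert authorize("auditor_1", "bank_a", grants, log) is True
assert authorize("intruder", "bank_a", grants, log) is False
assert len(log) == 2  # both the grant and the denial are recorded
```

Because denials are logged alongside grants, the chain supports after-the-fact review of who attempted access, not just who succeeded.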
Note: Papers are representative references; implementation follows the engineering of OpenTMP LLM.
Bring collaborative intelligence to your chain, robotics, or agent network.
Prefer email for security-sensitive or partnership inquiries.