OpenTMP LLM

Collaborative LLM Framework: Data Stays Local with Efficient Distributed Physical AI.

OpenTMP LLM aims to train and run models across distributed devices and users, making Physical AI efficient and private.

Private · Efficient · Governable

Data Remains On‑Prem · On‑Device Architecture · MPC LLM · Edge Acceleration · Model Marketplace · Governance
OpenTMP LLM Engine

Distributed • Private • Governable

CPU / GPU / NPU Compatible (RISC-V) · Edge Inference & P2P Training

Private LLM

On‑Device Training & Encrypted Inference.

Efficient LLM

Edge Optimization, Distillation & Quantization.

Governable LLM

MPC Governance & Shared Ownership.

Why OpenTMP LLM?

Break data silos and unlock collaborative LLM capabilities without data exposure. Institutions co-build shared models that meet security requirements while optimizing performance and cost in a distributed environment.

  • Data stays local; sensitive information never leaves the domain
  • End-to-end on-device inference
  • Multi-party governance; auditable parameters and access
  • Works across heterogeneous hardware for edge scenarios
Robotics
Edge Distributed Post-Training

Robots learn locally and share experience securely.

Automobile
End-to-End: On-Device Data Collection + Local Inference

On-device data collection with local inference and distributed remote training.

AI Agent
Private Inference

User data remains private with authorized, auditable access.

Governance
Shared Model Ownership

Fair collaboration ensured by MPC.

Core Features

Private LLM

  • On‑device training & private inference (SMPC/private execution)
  • Zero data egress with minimized visibility
  • Adaptable to regulatory & audit requirements
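The private-execution idea rests on secret sharing, the basic building block of SMPC protocols: each party holds only a random-looking share of a value, and no party ever sees plaintext data. A minimal sketch of additive secret sharing (illustrative only; the prime `P` and function names are assumptions, not the OpenTMP implementation):

```python
import secrets

# Additive secret sharing modulo a public prime: the secret is split
# into n shares that sum to it mod P. No proper subset of shares
# reveals anything about the secret.
P = 2**61 - 1  # public modulus (illustrative choice)

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; requires all of them."""
    return sum(shares) % P

# Shares are additively homomorphic: each party adds its shares of
# two secrets locally, and reconstruction yields the sum of secrets
# without anyone seeing either operand.
a_shares = share(42, 3)
b_shares = share(100, 3)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```

This additive homomorphism is why linear layers of a model can be evaluated directly on shares; nonlinearities need extra protocol machinery.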

Efficient LLM

  • Edge acceleration: distillation & quantization
  • Works on CPU / GPU / NPU (RISC-V)
  • P2P joint training to reduce cost
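Quantization is the main lever for edge acceleration: weights are mapped from float32 to int8, shrinking memory and enabling integer kernels. A minimal sketch of symmetric per-tensor int8 quantization (illustrative only; function names are assumptions, not the OpenTMP API):

```python
# Symmetric per-tensor int8 quantization: one scale factor maps the
# float range [-max_abs, +max_abs] onto [-127, 127].

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Quantize floats to int8 with a single shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from int8 values."""
    return [x * scale for x in q]

w = [0.5, -1.2, 0.03, 0.9]
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
# Per-weight error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Production quantizers typically refine this with per-channel scales and calibration data, but the storage saving (4x) and the error bound follow the same arithmetic.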

Governable LLM

  • MPC‑based governance
  • Shared ownership over models & parameters
  • Authorized access with traceable audit
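The governance pattern above can be illustrated as a t-of-n approval gate with an append-only audit trail: a model action proceeds only when enough governing parties sign off, and every decision is recorded. A toy sketch (class and party names are hypothetical; a real MPC deployment would replace the approval set with cryptographic signatures or shares):

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceGate:
    """Toy t-of-n authorization gate with an audit log."""
    parties: set[str]       # recognized governing parties
    threshold: int          # approvals required to proceed
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str, approvals: set[str]) -> bool:
        # Only approvals from recognized parties count.
        valid = approvals & self.parties
        ok = len(valid) >= self.threshold
        # Every decision, approved or denied, is recorded for audit.
        self.audit_log.append(
            f"{action}: {'APPROVED' if ok else 'DENIED'} by {sorted(valid)}"
        )
        return ok

gate = GovernanceGate(parties={"bankA", "bankB", "labC"}, threshold=2)
assert gate.authorize("apply-update-v2", {"bankA", "labC"})
assert not gate.authorize("export-model", {"bankA"})
```

The threshold encodes shared ownership (no single party can act alone), and the log realizes the "traceable audit" property.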

OpenTMP LLM Engine

A unified distributed LLM framework for edge training, private inference, and governable collaboration.

  • Distributed Edge AI / P2P Training
  • Secure Private Inference (SMPC)
  • Distributed Model Governance & Ownership
  • Seamless Hardware Integration (RISC-V CPU/GPU/NPU)
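The P2P training above follows the federated-averaging pattern: each node trains on its own data and ships only parameter updates, never raw data, to be aggregated. A minimal sketch (illustrative only; function and variable names are assumptions):

```python
# Federated averaging (FedAvg): combine per-node parameter vectors
# into a global model, weighted by each node's data volume. Raw
# training data never leaves the nodes -- only parameters move.

def fedavg(local_models: list[list[float]],
           weights: list[int]) -> list[float]:
    """Weighted average of per-node parameter vectors.
    `weights` is typically each node's local sample count."""
    total = sum(weights)
    dim = len(local_models[0])
    return [
        sum(m[i] * w for m, w in zip(local_models, weights)) / total
        for i in range(dim)
    ]

# Two nodes with equal data volume: the result is the plain mean.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 10])
assert global_model == [2.0, 3.0]
```

In an SMPC setting, the same aggregation can run over secret-shared updates, so the coordinator never sees any single node's parameters either.
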
Distributed

Multi‑node coordination; elastic orchestration.

Private

Encrypted compute; zero‑trust collaboration.

Governable

Parameter ownership & authorization.

Engine Diagram Placeholder

Use Cases

Robotics

Edge Post‑Training & Inference

Incremental learning locally; encrypted experience sharing across devices.

AI Agent

Private User Data

Private inference inside enterprises or devices with auditable access.

FinTech

Cross‑Institution Training

Risk, AML, and credit scoring with measurable contribution & governance.

Ecosystem & Partnerships

Co-building collaborative AI with robotics and edge partners such as RobinX and TAI Phone.

Robotics

RobinX × State Labs

  • Labeled data & edge post-training
  • Sensitive data protection
  • Edge-optimized VLA
Edge Device

TAI Phone × State Labs

  • Secure data on trusted hardware
  • On-device private inference
  • Distributed post-training

Selected Papers

SMPC
  • ASIACRYPT’21 Efficient Threshold ECDSA (Class Groups)
  • CCS’21 Online‑friendly Two‑Party ECDSA
  • ESORICS MPC‑in‑Multi‑Heads (multi‑prover ZK)
ZKML
  • CCS zkCNN (ZK proofs of CNN inference & accuracy)
  • USENIX Security Mystique (faster ZK transforms for ML)

Note: Papers are representative references; implementation follows the engineering of OpenTMP LLM.

Collaborate With Us

Bring collaborative intelligence to your chain, robotics, or agent network.

  • Cross-Organization Training & Encrypted Inference
  • Integration With Edge Devices and Federated Partners
  • PoC & Deployment Support for Institutions

Prefer email for security-sensitive or partnership inquiries.