OpenTMP LLM

Collaborative LLM Framework: Data stays local with encrypted training & inference.

OpenTMP LLM orchestrates multi‑party data and compute via MPC‑FL, enabling cross‑organization training and private inference: Private, Efficient, Governable collaborative AI.

Data remains on‑prem · Federated / multi‑party · MPC / threshold signing · Edge acceleration · Auditable governance
OpenTMP LLM Engine

Distributed • Private • Governable

CPU / GPU / NPU compatible · Edge inference & P2P training

Private LLM

On‑device training & encrypted inference.

Efficient LLM

Edge optimization, distillation & quantization.

Governable LLM

MPC governance & shared ownership.

Why OpenTMP LLM?

Break data silos and unlock collaboration without exposing data. Institutions co‑build shared model capabilities that meet compliance requirements while optimizing performance and cost.

  • Data stays local; sensitive information never leaves the domain
  • End‑to‑end: federated training + encrypted inference
  • Multi‑party governance; auditable parameters and access
  • Works across heterogeneous hardware for edge scenarios
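Conceptually, the federated-training side of this flow can be sketched as weighted model averaging (FedAvg-style): each institution trains locally and shares only parameter updates, never raw data. The snippet below is an illustrative sketch of that idea, not the OpenTMP LLM API:

```python
# Hypothetical FedAvg sketch: institutions contribute locally trained
# weights plus their sample counts; only the weighted average is shared.
def fed_avg(local_weights, sample_counts):
    """Weighted average of per-institution model weight vectors."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three institutions, each with a locally trained 2-parameter model.
updates = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
counts = [100, 200, 100]
print(fed_avg(updates, counts))  # ≈ [0.4, 0.8]
```

Weighting by sample count keeps institutions with more data proportionally influential while still revealing nothing beyond the aggregated parameters.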
FinTech
Cross‑institution training

Risk/AML/credit scoring with data retained by each institution.

Robotics
Edge distributed post‑training

Robots learn locally and share experience securely.

AI Agent
Private inference

User data remains private with authorized, auditable access.

Governance
Shared model ownership

Fair collaboration ensured by MPC/TSS.

Core features

Private LLM

  • On‑device training & private inference (SMPC/private execution)
  • Zero data egress with minimized visibility
  • Flexible to regulatory & audit requirements
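A core primitive behind SMPC-style private inference is additive secret sharing: a value is split into random shares so that no single party learns it, yet parties can still compute on shares locally. This toy sketch illustrates the principle only; it is not the OpenTMP LLM implementation:

```python
import random

# Additive secret sharing mod a prime: shares sum to the secret,
# and any proper subset of shares is uniformly random (reveals nothing).
P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def share(x, n=3):
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reveal(parts):
    return sum(parts) % P

a, b = 42, 100
sa, sb = share(a), share(b)
# Each party adds its own two shares locally; only the sum is revealed.
local_sums = [x + y for x, y in zip(sa, sb)]
print(reveal(local_sums))  # 142
```

Addition of shared values needs no interaction; real SMPC protocols add multiplication (e.g. via Beaver triples) on top of this same structure.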

Efficient LLM

  • Edge acceleration: distillation & quantization
  • Works on CPU / GPU / NPU
  • P2P cooperative training to reduce cost
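Quantization, one of the edge optimizations listed above, maps floating-point weights to small integers so models fit on constrained hardware. A minimal sketch of symmetric int8 post-training quantization (illustrative only, not the OpenTMP LLM tooling):

```python
# Symmetric int8 quantization: scale by the largest weight magnitude,
# round to integers in [-127, 127], and keep the scale for dequantizing.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1              # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03]
q, s = quantize(w)
print(q)  # [50, -127, 3]
# dequantize(q, s) recovers values close to w, within one scale step.
```

The storage drops from 32-bit floats to 8-bit integers at the cost of a bounded rounding error, which is usually acceptable for inference at the edge.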

Governable LLM

  • MPC‑based governance & threshold signing
  • Shared ownership over models & parameters
  • Authorized access with traceable audit
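Threshold signing (TSS) rests on t-of-n secret sharing: any t parties can jointly authorize an action, while fewer than t learn nothing about the key. This toy Shamir sharing over a prime field shows the underlying idea; it is a sketch of the principle, not the OpenTMP LLM protocol:

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative choice)

def split(secret, n=5, t=3):
    """Shamir t-of-n: evaluate a random degree-(t-1) polynomial at x=1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

key = 123456789
shares = split(key)
print(reconstruct(shares[:3]) == key)  # True: any 3 of 5 suffice
```

Production TSS schemes (such as the threshold ECDSA work cited below) never reconstruct the key in one place; parties produce signature shares instead, but the t-of-n access structure is the same.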

OpenTMP LLM Engine

A unified distributed LLM framework for edge training, private inference, and governable collaboration.

  • Distributed Edge AI / P2P Training
  • Secure Private Inference (SMPC)
  • Distributed Model Governance & Ownership
  • Seamless Hardware Integration (CPU/GPU/NPU)
Distributed

Multi‑node coordination; elastic orchestration.

Private

Encrypted compute; zero‑trust collaboration.

Governable

Parameter ownership & authorization.

Engine Diagram Placeholder

Use cases

FinTech

Cross‑institution training

Risk, AML, and credit scoring with measurable contribution & governance.

Robotics

Edge post‑training & inference

Incremental learning locally; encrypted experience sharing across devices.

AI Agent

Private user data

Private inference inside enterprises or devices with auditable access.

Ecosystem & partnerships

Co-building collaborative AI with robotics and edge partners such as RobinX and TAI Phone.

Robotics

RobinX × State Labs

  • Labeled data & edge post-training
  • Sensitive data protection
  • Edge-optimized VLA
Edge Device

TAI Phone × State Labs

  • Secure data on trusted hardware
  • On-device private inference
  • Distributed post-training

Selected papers

SMPC
  • ASIACRYPT’21 Efficient Threshold ECDSA (Class Groups)
  • CCS’21 Online‑friendly Two‑Party ECDSA
  • ESORICS MPC‑in‑Multi‑Heads (multi‑prover ZK)
ZKML
  • CCS zkCNN (ZK proofs of CNN inference & accuracy)
  • USENIX Security Mystique (faster ZK transforms for ML)

Note: These papers are representative references; the implementation follows OpenTMP LLM's own engineering.

Collaborate with us

Bring collaborative intelligence to your chain, robotics, or agent network.

  • Cross-organization training & encrypted inference
  • Integration with edge devices and federated partners
  • PoC & deployment support for institutions

Prefer email for security-sensitive or partnership inquiries.