Open-source AI local build service
Reference:
Condition: New product
03/09/2026
03/18/2026
Brand: CA
Open-Source AI Local Build Service
Zero Data Leakage
All computation and temporary storage remain on the company's internal GPU servers and are never handled by third parties, in compliance with strict confidentiality regulations.
Dedicated Local AI Server
The platform can connect to multiple local GPU servers, so computing power scales linearly as servers are added.
Internal Exclusive Platform
Preloaded with an Open WebUI / LM Studio-style portal, with a customizable dedicated domain.
Up to 120B Class Models
Supports open-source models with up to 120B (120 billion) parameters.
Up to 200K Context Length
Can process ultra-long inputs of approximately 200,000 Chinese characters, covering long documents, code, reports, deep meeting analysis, and extended conversation scenarios.
Freely Switch/Connect Multiple Models
One-click switching between open-source LLMs/VLMs in the management interface; connect multiple model backends simultaneously to compare their outputs.
* Recommended sweet spot: an ~80B model with 128K context length.
* An 80B model averages about 35 tokens/sec, suitable for teams of fewer than 10 people.
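The multi-backend comparison described above can be sketched against the OpenAI-compatible chat API that most local serving stacks (Open WebUI, LM Studio, vLLM, and similar) expose. This is an illustrative sketch only: the backend names, hostnames, and ports below are assumptions, not part of the service.

```python
import json
from urllib import request

# Hypothetical local backends; names, hosts, and ports are illustrative only.
BACKENDS = {
    "llama-80b": "http://gpu-node-1:8000/v1/chat/completions",
    "qwen-vl":   "http://gpu-node-2:8000/v1/chat/completions",
}

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def query(url: str, payload: dict, timeout: float = 60.0) -> str:
    """POST the payload to one backend and return the first choice's text."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

def compare(prompt: str) -> dict:
    """Send the same prompt to every backend for side-by-side comparison."""
    return {name: query(url, build_payload(name, prompt))
            for name, url in BACKENDS.items()}

if __name__ == "__main__":
    for name, answer in compare("Summarize our Q3 report in one sentence.").items():
        print(f"--- {name} ---\n{answer}\n")
```

Because every backend speaks the same API shape, adding another model to the comparison is just one more entry in the `BACKENDS` mapping.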
Package 1
- Dedicated AI platform with 2nd-level domain
- 1× Blackwell G10 128GB unified memory
- Supports open-source LLMs / VLMs of 80B parameters and below
- Computing capacity: 2 hours/person/day @ ~30 tokens/s
- 1-year local hosting (4 vCPU · 16GB RAM · 100GB · No GPU)
Package 2
- Dedicated AI platform with 2nd-level domain
- 2× Blackwell G10 128GB + stacking cable
- Supports open-source LLMs / VLMs below 80B parameters (dual-card acceleration)
- Computing capacity: 2 hours/person/day @ ~30 tokens/s
- 1-year local hosting (4 vCPU · 16GB RAM · 100GB · No GPU)
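The "2 hours/person/day @ ~30 tokens/s" figure in both packages implies a simple daily token budget. A back-of-envelope check, where the group size of 10 is taken from the sweet-spot note above and the other numbers from the package specs:

```python
# Back-of-envelope capacity check for "2 hours/person/day @ ~30 tokens/s".
TOKENS_PER_SEC = 30      # sustained generation speed per the package spec
HOURS_PER_PERSON = 2     # daily usage budget per person
GROUP_SIZE = 10          # upper bound from the sweet-spot note

tokens_per_person_day = TOKENS_PER_SEC * HOURS_PER_PERSON * 3600
total_tokens_day = tokens_per_person_day * GROUP_SIZE

print(f"{tokens_per_person_day:,} tokens/person/day")    # 216,000
print(f"{total_tokens_day:,} tokens/day for the group")  # 2,160,000
```

That is roughly 216K generated tokens per person per day, or about 2.16M tokens daily across a full 10-person group.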