This document supports the business plan appendix. Diagrams describe a target-state reference architecture. Pricing is illustrative until vendor and tax positions are finalized.
Client access: HTTPS (web console, APIs). Underlying: in-country data centers or compliant local partner infrastructure (multi-AZ within Vietnam).
| Layer | Components | Role |
|---|---|---|
| Integration | Web console, SDKs/CLI, REST & event APIs | Customer access & automation |
| Control plane | IAM, billing/quotas, policy & compliance configuration | Governance & commercial metering |
| Data plane | Distributed object storage, snapshots/versioning, in-VN replication | Durable, scalable data services |
| Security plane | Edge protections (WAF/DDoS as applicable), IDS, hardening baselines | Detective & preventive controls |
| AI, automation & compute | Managed LLM endpoints (e.g. Azure OpenAI, GPT-4o/4.1), RAG pipelines, OCR/Form Recognizer, analytics sandboxes (Python/AutoML), RPA connectors (UiPath, Power Automate), speech (Whisper, Azure Speech), recommendation services, tenant GPU instances (configurable vCPU, RAM, disk, GPU SKU) | ERP intelligence, document capture, forecasting, workflows, voice ERP, GPU rental for ML |
The table below maps common ERP AI patterns to technology options. The actual stack is selected per tenant (latency, cost, data sovereignty). RAG and knowledge services connect to approved ERP APIs, warehouses, and document stores inside the residency boundary where possible.
| # | Capability | Primary use cases | Technology options | Architecture notes |
|---|---|---|---|---|
| 1 | Internal AI chatbot (ERP assistant) | Natural-language data queries (“this month’s revenue?”), procedure help, instant report Q&A | OpenAI GPT-4o / GPT-4.1; Azure OpenAI Service; RAG over ERP DB views, cubes, and curated APIs | Orchestration layer enforces RBAC: queries execute as the user’s ERP identity; audit log of questions & tool calls. |
| 2 | AI OCR & document processing | Invoice → bookkeeping lines; automated data entry from scans/PDFs | Google Vision API; Azure AI Document Intelligence / Form Recognizer; custom models on tenant GPUs | Pipeline: ingest → classify → extract → validation rules → ERP posting API; human-in-the-loop queue for low confidence. |
| 3 | AI data analytics (AI BI) | Revenue & inventory forecasting; anomaly detection (e.g. expense spikes); financial commentary | Python + ML (scikit-learn); Power BI + AI features; AutoML (Google Vertex / Azure ML) | Feature store or warehouse connection; scheduled jobs on CPU or GPU nodes; exports to dashboards and alerts. |
| 4 | AI-driven process automation (AI + RPA) | Order generation, approval routing, intelligent workflows | UiPath (AI Center, Document Understanding); Microsoft Power Automate + AI Builder | Event bus from ERP ↔ automation runners; secrets in vault; IDS monitors bot service accounts. |
| 5 | AI voice (voice-enabled ERP) | Hands-free KPI answers (“today’s revenue?”); mobile & field scenarios | Whisper (STT); Azure Speech (STT/TTS); LLM backend as in (1) | Low-latency path: edge device → STT → intent → secured data API → TTS response; PII minimization in audio retention. |
| 6 | AI personalization (recommendation) | Purchase suggestions, pricing hints, lead/customer recommendations | Collaborative filtering + gradient boosting; embeddings from product/customer history; optional deep models on GPU | Offline training on GPU rental; online serving API with A/B flags and explainability hooks. |
| 7 | AI for support | Internal L1/L2 bot; ticketing integration; troubleshooting (“why can’t I run AR report?”) | Same LLM stack as (1); integration with ITSM/ticket APIs; runbooks as RAG sources | Escalation to human with full transcript; link to known-error DB. |
| 8 | AI knowledge base | Centralized ERP manuals, implementation guides, tribal knowledge | Document ingestion, chunking, embeddings, vector DB; optional multilingual models | Versioning tied to ERP release; access aligned to module licensing. |
| 9 | AI help (contextual process guidance) | Step-by-step “create warehouse receipt” with deep links to screens | LLM + structured procedure graph; role-aware (accountant vs warehouse); UI context from client shell | Differentiator: combines RBAC, current module/screen, and org policies — not static PDFs only. |
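The RBAC enforcement described for the ERP assistant (row 1) — queries execute as the user's ERP identity, with every question and tool call audited — can be sketched as follows. This is an illustrative sketch only; the names (`run_erp_tool`, `VIEW_REVENUE`, `AUDIT_LOG`) are assumptions, not actual Goodland APIs.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    permissions: set  # ERP permissions granted to this identity

@dataclass
class AuditEntry:
    user: str
    question: str
    tool_call: str

AUDIT_LOG: list = []

def run_erp_tool(user: User, tool: str, required_perm: str, question: str) -> str:
    """Execute a tool call as the user's ERP identity, logging every attempt."""
    AUDIT_LOG.append(AuditEntry(user.name, question, tool))  # audit before authz
    if required_perm not in user.permissions:
        return "ACCESS DENIED: missing permission " + required_perm
    return f"result of {tool}"  # a real system would hit an ERP view/API here

accountant = User("lan", {"VIEW_REVENUE"})
warehouse = User("minh", {"VIEW_STOCK"})

print(run_erp_tool(accountant, "revenue_this_month", "VIEW_REVENUE",
                   "this month's revenue?"))
print(run_erp_tool(warehouse, "revenue_this_month", "VIEW_REVENUE",
                   "this month's revenue?"))
print(len(AUDIT_LOG))  # both attempts are audited, including the denied one
```

The key design point: the audit entry is written before the permission check, so denied attempts are visible to security review, matching the "audit log of questions & tool calls" note above.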
Service-to-service security: mutual TLS, OAuth2, ERP session propagation. Background processing: schedulers, queues, object storage (section 2).
Customers choose GPU type, VRAM tier, vCPU, RAM, storage, and duration (hourly or monthly reserved). Suitable for training, fine-tuning, batch inference, and self-hosted models.
SKUs align to NVIDIA-class accelerators commonly requested for ML. The exact chip generation depends on datacenter supply; VRAM columns reflect customer-facing usable VRAM tiers (8, 16, 24, 32, 40, 48, 80 GB). The H100 is offered as the flagship training/inference tier (typically 80 GB).
| Tier | Typical GPU families (examples) | VRAM focus | Typical workloads |
|---|---|---|---|
| Entry | T4-class, L4-class | 8–16 GB | Light inference, dev/test, small CV/NLP |
| Professional | RTX / A-series workstation class | 24 GB | Fine-tuning small LLMs, batch OCR post-processing |
| High-memory | A30 / L40S-class (examples) | 32–48 GB | Larger models, multi-GPU optional |
| Datacenter | A100-class | 40 / 80 GB | Training, LLM serving at scale |
| Flagship | NVIDIA H100 | 80 GB | Frontier training, large-scale inference, HPC-style jobs |
Add vCPU and system RAM in standard blocks (e.g. 8–32 vCPU, 32–256 GB RAM) and network-attached or local NVMe volumes per performance tier.
Price type: these figures are indicative B2B list / retail prices charged to end customers, not Goodland's wholesale cost from AWS or partners, and not distributor buy prices (unless a separate channel list is published). For COGS, hyperscaler wholesale rates, allocated 24/7 support, Vietnam overhead, and competitor list benchmarks, see Goodland-Cloud-Unit-Economics-Benchmarking (HTML/PDF).
VAT may apply. Annual prepay is typically 15% below 12× monthly on Starter–Business tiers. Enterprise is custom.
| Tier | Target segment | Included storage | Backup | IDS / monitoring | Support | Indicative monthly (VND) |
|---|---|---|---|---|---|---|
| Starter | Small SME, pilots | 500 GB | Daily, 14-day retention | Basic alerting | Business hours, ticket | 2,500,000 – 4,000,000 |
| Growth | Growing SME, startups | 2 TB | Daily + weekly, 30-day | Managed IDS bundle, monthly report | Extended + chat | 8,000,000 – 12,000,000 |
| Business | Mid-market | 10 TB | Policy-based RPO/RTO options | Dedicated correlation, SOC integration option | 24/7, optional CSM | 28,000,000 – 45,000,000 |
| Enterprise | Regulated / large orgs | Custom (50 TB+) | In-VN geo-redundancy, legal hold | Custom playbooks, IR retainer | 24/7 + on-site (major cities) | Custom (from ~80,000,000 + commit) |
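The annual-prepay rule (15% below 12× monthly on Starter–Business) works out as follows; the figures used here are midpoints of the indicative monthly ranges in the tier table and remain indicative, not quoted prices.

```python
def annual_prepay(monthly_vnd: int, discount: float = 0.15) -> int:
    """Annual prepay = 12 x monthly list price, less the prepay discount."""
    return round(12 * monthly_vnd * (1 - discount))

# Midpoints of the indicative monthly ranges (VND)
tiers = {"Starter": 3_250_000, "Growth": 10_000_000, "Business": 36_500_000}
for name, monthly in tiers.items():
    print(f"{name}: {annual_prepay(monthly):,} VND / year")
# Starter midpoint: 12 x 3,250,000 x 0.85 = 33,150,000 VND
```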
| Add-on | Notes | Typical range (VND / month) |
|---|---|---|
| Extra storage | Per TB, volume discounts | 400,000 – 900,000 / TB |
| Extended retention | e.g. 90d → 365d | +15–35% on backup component |
| API & integration pack | Higher limits, dedicated endpoints | 1,500,000 – 5,000,000 |
| Premium SLA | e.g. 99.9% target | +20–40% platform fee |
Bundles assume Goodland-hosted orchestration + metering. Third-party model usage (OpenAI, Azure OpenAI, Google Cloud) is often passed through at cost + margin or requires customer bring-your-own-key (BYOK). Numbers are order-of-magnitude for planning.
| Package | What’s included | Indicative monthly (VND) | Notes |
|---|---|---|---|
| AI ERP Lite | Internal assistant + knowledge base (1), (8); up to ~5k RAG queries/mo; 1 connector | 6,000,000 – 12,000,000 | Add seats or queries à la carte |
| AI ERP Standard | Lite + OCR pipeline (2) up to 2k pages/mo + support bot (7) | 15,000,000 – 28,000,000 | Extra pages 800 – 2,500 VND/page by volume |
| AI ERP Plus | Standard + voice channel (5) + workflow hooks for RPA (4) — 2 flows | 28,000,000 – 48,000,000 | STT/TTS minutes billed separately (below) |
| AI Analytics add-on | Forecasting & anomaly jobs (3); 1 dashboard workspace; scheduled scoring | 8,000,000 – 22,000,000 | Excludes Power BI / cloud AutoML licenses if customer-owned |
| Recommendations add-on | Reco API (6); retrain monthly; A/B hooks | 5,000,000 – 15,000,000 | Heavy training uses GPU hours (section 11) |
| Contextual AI Help | Role-aware guidance (9); procedure graph build + 40h professional services once | 4,000,000 – 10,000,000 / mo + one-time 40,000,000 – 120,000,000 | PS for screen-map & content curation |
| Meter | Unit | Typical range (VND) |
|---|---|---|
| LLM / RAG query (after bundle) | per 1,000 calls | 600,000 – 2,500,000 |
| STT (Whisper / Azure Speech) | per audio hour | 180,000 – 550,000 |
| TTS | per 1M characters | 400,000 – 1,200,000 |
| OCR / document AI | per page | 800 – 2,500 |
| RPA production bot | per bot / month | 3,000,000 – 9,000,000 |
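As a worked example of how the bundles and meters combine: a tenant on AI ERP Standard who runs 7,500 RAG queries (5,000 bundled) and 3,200 OCR pages (2,000 bundled) in a month would see overage roughly as below. The unit prices are midpoints of the indicative ranges above; the `overage` helper is illustrative, not a billing-system API.

```python
def overage(used: int, included: int, unit_price_vnd: float, per: int = 1) -> int:
    """Charge only for usage beyond the bundled allowance.

    unit_price_vnd is the price per `per` units (e.g. per 1,000 LLM calls).
    """
    extra = max(0, used - included)
    return round(extra * unit_price_vnd / per)

# Midpoint meter prices from the table above (indicative VND)
rag_bill = overage(used=7_500, included=5_000, unit_price_vnd=1_550_000, per=1_000)
ocr_bill = overage(used=3_200, included=2_000, unit_price_vnd=1_650)
print(f"RAG overage: {rag_bill:,} VND")  # 2,500 extra calls
print(f"OCR overage: {ocr_bill:,} VND")  # 1,200 extra pages
```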
Linux VM with the selected GPU; vCPU and RAM are priced additively (examples: 2,000 – 8,000 VND / vCPU-hour; 1,500 – 5,000 VND / GB-RAM-hour). Disk: ~400 – 1,200 VND / GB-month (NVMe premium). Committed use: 1-month commit ~8–12% off, 12-month ~20–35% off the headline hourly rate. Spot / interruptible (if offered): ~40–70% below on-demand. VAT may apply.
| GPU tier | VRAM (customer-facing) | Indicative VND / GPU-hour | Comment |
|---|---|---|---|
| Entry (e.g. T4 / L4 class) | 8 GB | 12,000 – 28,000 | Dev / light inference |
| Entry+ | 16 GB | 18,000 – 38,000 | Small fine-tunes |
| Workstation-class | 24 GB | 35,000 – 75,000 | Mid-size models |
| High-memory | 32 GB | 55,000 – 110,000 | Training / batch |
| High-memory | 40–48 GB | 85,000 – 190,000 | Larger batches, multi-worker |
| Datacenter (A100-class) | 40 GB | 150,000 – 320,000 | Production training |
| Datacenter (A100-class) | 80 GB | 220,000 – 450,000 | Large-model training |
| Flagship (H100-class) | 80 GB | 480,000 – 980,000 | Frontier training & heavy inference |
Packaging: Publish bundled SKUs (GPU + fixed vCPU/RAM/disk) for common sizes to simplify quoting; use component meters above for custom builds.
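A bundled SKU quote can be derived from the component meters; a minimal sketch, assuming midpoint rates, a 730-hour month, a hypothetical 24 GB workstation-class build, and that commit discounts apply to the hourly components but not to monthly storage (an assumption, since the text says "off headline hourly").

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_quote(gpu_hr: int, vcpus: int, vcpu_hr: int,
                  ram_gb: int, ram_gb_hr: int,
                  disk_gb: int, disk_gb_month: int,
                  commit_discount: float = 0.0) -> int:
    """Monthly cost: hourly meters x 730 (with commit discount),
    plus per-month disk storage (undiscounted)."""
    hourly = gpu_hr + vcpus * vcpu_hr + ram_gb * ram_gb_hr
    return round(hourly * HOURS_PER_MONTH * (1 - commit_discount)
                 + disk_gb * disk_gb_month)

# Hypothetical build: 24 GB GPU, 16 vCPU, 64 GB RAM, 500 GB NVMe,
# all at midpoint rates (VND), with a ~12-month commit discount midpoint
quote = monthly_quote(gpu_hr=55_000, vcpus=16, vcpu_hr=5_000,
                      ram_gb=64, ram_gb_hr=3_250,
                      disk_gb=500, disk_gb_month=800,
                      commit_discount=0.275)
print(f"{quote:,} VND / month")
```

Publishing a few such precomputed bundles (per the packaging note above) avoids re-deriving this sum on every quote while keeping the component meters available for custom builds.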
Goodland Cloud — Architecture & Pricing appendix (core platform, ERP AI catalog, GPU rental). Generated for planning; not a contractual offer. GPU and model prices fluctuate with supply and FX. © Goodland Cloud (concept document).
Related: implementation & support — Goodland-Cloud-Technical-Overview-Support; COGS, benchmarks & margins — Goodland-Cloud-Unit-Economics-Benchmarking (HTML/PDF).