
Powering the Future of AI

Revolutionizing AI Training, Inference, and Performance


About Us

VISCOR Computing Corporation
  • VISCOR Computing Corporation specializes in the procurement, deployment, and operation of high-performance computing (HPC) infrastructure, serving the Asian market (Northeast Asia, Southeast Asia, Taiwan) and global cloud service providers (CSPs).

  • In its initial phase, the company has acquired eight NVIDIA GB200 NVL72 systems (GPU + Arm architecture) and several NVIDIA DGX B200 systems (GPU + x86 architecture).

  • We provide AI training, edge computing, and data processing solutions while focusing on sustainability and technological innovation.

VISCOR and NVIDIA Collaboration

Through close collaboration with NVIDIA, we deliver cutting-edge AI computing platforms for enterprises and data centers worldwide, ensuring optimal AI training performance at the best cost efficiency.

Key Innovations

Security
Local AI cloud computing with enhanced data security
Performance
High-performance AI training, enabling powerful AI applications
Stability
Low-latency and highly stable computing services from local AI infrastructure
Deployment
Flexible deployment options tailored to diverse AI workloads

NVIDIA GB200 NVL72

Core Features

  • 72 NVIDIA Blackwell GPUs and 36 Grace CPUs connected in a single NVLink domain, purpose-built for large-scale AI training with high parallel-processing efficiency

  • Up to 13.5 TB of HBM3e memory across the rack, ensuring high-speed data access

  • Supports FP4, FP8, FP16/BF16, and INT8 precision, optimizing AI training and inference performance (see the mixed-precision sketch after this list)

  • Liquid-cooled rack design with advanced thermal management, ensuring stable, continuous high-performance operation
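
These precision modes are exercised through framework-level mixed precision. Below is a minimal PyTorch sketch; the model and tensor sizes are placeholders, not VISCOR specifics.

```python
import torch
import torch.nn as nn

# Placeholder model and data; sizes are illustrative, not VISCOR specifics.
model = nn.Linear(4096, 4096).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 4096, device="cuda")
target = torch.randn(32, 4096, device="cuda")

# autocast runs the matmuls in bfloat16 while keeping FP32 master weights,
# exercising one of the precision modes listed above.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), target)

loss.backward()
optimizer.step()
```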

AI Training

  • 3× faster than H100, ideal for Large Language Models (LLMs), deep learning, and high-performance computing (HPC)

  • Supports multi-node distributed training, increasing AI training throughput and scalability (a minimal sketch follows this list)

  • Optimized for generative AI, delivering enhanced computing power for intensive AI models
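
As a concrete illustration of the multi-node distributed training mentioned above, here is a minimal PyTorch DistributedDataParallel sketch; the model is a placeholder.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; DDP all-reduces gradients across all workers.
    model = DDP(nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    x = torch.randn(16, 1024, device="cuda")
    model(x).sum().backward()   # gradient sync happens during backward
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched across two nodes with, e.g., `torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py`, each node contributes eight GPU worker processes.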

AI Inference

  • Ultra-low latency inference, ideal for real-time AI applications

  • 2.5× faster than the H100 and ahead of the H200, significantly improving batch inference efficiency

  • High-performance processing for multi-layer neural network inference, supporting complex AI tasks

LLM Inference

  • 30× faster than NVIDIA H100 Tensor Core GPU

  • Supports GPT-family models, BERT, Llama, and other generative and language-comprehension models

LLM Training

  • 4× faster than H100, drastically reducing AI training cycles

  • Designed for high-efficiency LLM training architectures, optimizing parameter tuning and computational efficiency

Energy Efficiency

  • 25× energy efficiency improvement over H100, reducing AI computing power consumption

  • Optimized power management, ensuring sustainable AI data center operations

Data Processing

  • 18× faster than traditional CPUs, significantly boosting data analytics and high-performance computing

  • Suitable for cloud computing, big data processing, and machine learning workloads

Performance Metrics

| GPU | AI Training Speed | AI Inference Speed | LLM Training Speed | LLM Inference Speed | Energy Efficiency Improvement | Data Processing Speed |
|---|---|---|---|---|---|---|
| NVIDIA GB200 NVL72 | 3× faster than H100 | 2.5× faster than H100 | 4× faster than H100 | 30× faster than H100 | 25× better than H100 | 18× faster than CPU |
| NVIDIA H100 | Baseline | Baseline | Baseline | Baseline | Baseline | N/A |
| NVIDIA H200 | 1.5× faster than H100 | 2× faster than H100 | N/A | N/A | N/A | N/A |
| AMD MI300X | 2× faster than H100 | 2× faster than H100 | N/A | N/A | 20× better than H100 | N/A |
| Intel Gaudi 2 | 2× faster than H100 | 1.8× faster than H100 | N/A | N/A | 18× better than H100 | N/A |


Use Cases

Smart Cities

AI-powered traffic monitoring | Violation detection systems | Urban data analytics

Law Enforcement & Fraud Prevention

AI-based financial fraud analysis | Crime prediction and risk assessment

Defense & Security

Virtual metrology improves product yield and enables zero-defect production

Environmental & Climate Monitoring

AI-enhanced weather forecasting | Carbon emission analysis | Ecological monitoring

Healthcare AI

Medical imaging diagnostics | Pharmaceutical research | Biomedical data analysis

Solutions & Pricing

VISCOR Cluster Engine Pricing

Please contact us for a project quote

VISCOR Cluster Engine integrates Kubernetes-based AI training architectures, providing efficient GPU resource management for AI cloud computing and enterprise AI infrastructure.

  • Optimized on-premises AI computing environments

  • Automated load balancing and resource management

  • Ideal for government and enterprise AI training centers (a minimal GPU-scheduling sketch follows this list)
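
As a hedged sketch of the Kubernetes-based GPU orchestration described above: a training pod requests GPUs through the standard `nvidia.com/gpu` resource. The pod name, container image, and GPU count below are illustrative, not VISCOR defaults.

```python
from kubernetes import client, config

# Assumes a reachable cluster and a valid kubeconfig; every name here
# (pod name, image, GPU count) is illustrative, not a VISCOR default.
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-train-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # example NGC image
                command=["torchrun", "--nproc_per_node=8", "train.py"],
                # The NVIDIA device plugin exposes GPUs as a schedulable resource.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "8"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```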

On-Demand GPU Plan

NT$ 4,900 / GPU hour

Contact an Expert

Flexible GPU usage for short-term AI computation & training

  • GPU Specs: 8 × NVIDIA GB200 NVL72

  • Processors: Dual Intel 48-core CPUs

  • Total Memory: 2TB

  • Storage: 2 × 960GB NVMe SSD + 8 × 7.68TB NVMe SSD

  • GPU Connectivity: InfiniBand 400 Gb/s per GPU

  • Ethernet Speed: 100 Gb/s

Plans and Prices

Pricing in Progress.

Stay Tuned

Contact Us

Get in touch.


FAQ

What GPU plans and billing models do you offer?
  • We offer two GPU solutions, On-Demand GPU and Reserved GPU, tailored to different AI training and inference needs. Billing options include (a worked cost example follows this list):

    • Hourly billing (charged per full hour)

    • Compute-based billing (TFLOP-based pricing, optional)

    • API subscription model (billed per request or token usage)
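
To make these billing models concrete, a small worked example: the hourly rate is the On-Demand plan's NT$4,900 per GPU-hour quoted above, while the per-token rate is purely hypothetical.

```python
import math

# Rates: the hourly figure is the On-Demand plan's NT$4,900 per GPU-hour
# quoted above; the per-token rate is purely hypothetical, for illustration.
HOURLY_RATE_NTD = 4900
TOKEN_RATE_NTD = 0.002  # hypothetical

def hourly_cost(gpus: int, hours: float) -> int:
    """Hourly billing: each started hour is charged as a full hour."""
    return gpus * math.ceil(hours) * HOURLY_RATE_NTD

def token_cost(tokens: int) -> float:
    """API subscription model: billed on token usage."""
    return tokens * TOKEN_RATE_NTD

# An 8-GPU job running 10.5 hours is billed as 11 full hours per GPU:
print(hourly_cost(8, 10.5))   # 8 * 11 * 4,900 = NT$431,200
print(token_cost(1_000_000))  # NT$2,000 at the hypothetical token rate
```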

Do you offer discounts or special pricing?
  • Yes, we offer:

    • Long-term leasing discounts (for usage over 100 hours)

    • Special pricing for enterprise AI training projects

    • Government AI surveillance and HPC project collaborations

Which GPU models do you provide?
  • We currently provide NVIDIA GB200 NVL72 and support hybrid cluster operations with H100 and H200.

How does the GB200 NVL72 compare with the H100?
  • With its large HBM3e memory capacity and bandwidth, the GB200 NVL72 delivers 3× faster AI training and 30× higher LLM inference performance than the H100.

Do you support cluster management and distributed training?
  • Yes, VISCOR Cluster Engine supports Kubernetes GPU orchestration, automated load balancing, and distributed AI training.

Which AI frameworks and development tools are supported?
  • We support a wide range of AI frameworks and development tools, including TensorFlow, PyTorch, JAX, CUDA, cuDNN, TensorRT, and Triton, fully compatible with Docker and Kubernetes. (A quick environment check is sketched below.)
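
A quick way to confirm this stack can see the GPUs from inside a container (PyTorch shown; purely illustrative):

```python
import torch

# Confirms the framework stack can see the GPUs from inside a container.
print("PyTorch:", torch.__version__, "CUDA:", torch.version.cuda)
print("GPUs visible:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Device 0:", torch.cuda.get_device_name(0))
```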

Do you provide APIs?
  • Yes, we provide RESTful API and gRPC API for AI inference and GPU resource management. (An illustrative call is sketched below.)
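
For illustration, a call against a hypothetical REST inference endpoint; the URL, model name, and request schema are placeholders, not VISCOR's published API.

```python
import requests

# Hypothetical endpoint, model name, and schema -- placeholders only,
# not VISCOR's published API.
API_URL = "https://api.example.com/v1/inference"
headers = {"Authorization": "Bearer <YOUR_API_KEY>"}

resp = requests.post(
    API_URL,
    headers=headers,
    json={"model": "llama-70b", "prompt": "Hello", "max_tokens": 64},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```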

Do your servers support high-speed GPU interconnects?
  • Yes, our GPU servers support NVLink/NVSwitch and InfiniBand 400 Gb/s, ensuring low-latency, high-throughput AI training and inference.

Do you support GPUDirect Storage?
  • Yes, VISCOR supports GPUDirect Storage, accelerating AI training data access and enhancing I/O performance.

Are you compliant with security and privacy standards?
  • Yes, we comply with ISO 27001, SOC 2, GDPR, and CCPA, and offer on-premises AI computing solutions for government deployments.

Do you store or share enterprise AI training data?
  • No, we do not store or share enterprise AI training data. We provide dedicated GPU servers to ensure data privacy.

Can we connect through VPN or a private cloud network?
  • Yes, we support VPN and private cloud networking to ensure enterprise data security.

Can we monitor GPU resources in real time?
  • Yes, enterprises can monitor GPU utilization, memory status, and error logs via the management dashboard, with API-based monitoring support. (A minimal monitoring sketch follows.)
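
A minimal sketch of API-based GPU monitoring using NVIDIA's NVML bindings (the `pynvml` package; assumes the NVIDIA driver is installed):

```python
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # busy percentages
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # bytes
        print(f"GPU {i}: util={util.gpu}% mem={mem.used / mem.total:.0%}")
finally:
    pynvml.nvmlShutdown()
```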

Do you offer AI training consulting?
  • Yes, we offer dedicated AI training consultants for one-on-one AI model optimization guidance.

Do you offer training programs and technical support?
  • Yes, we provide enterprise AI training courses and hands-on MLOps training, along with developer technical support.
