NVIDIA GPU For Machine Learning

The NVIDIA Transformation from Gaming to Machine Learning

Jensen Huang is a Taiwanese American.

NVIDIA was co-founded in 1993 by Jensen Huang, together with Chris Malachowsky and Curtis Priem, after Huang left his job at LSI Logic; earlier in his career he had worked at AMD.

Both AMD and NVIDIA are customers of Taiwan Semiconductor Manufacturing Company (TSMC).

In the early 2010s, NVIDIA graphics cards began to be used experimentally by AI and machine learning researchers to accelerate the matrix multiplications and optimization at the heart of neural network training.
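To see why matrix multiplication matters, note that a dense neural-network layer is essentially one matrix multiply, which is exactly the operation GPUs parallelize. A minimal NumPy sketch (shapes chosen for illustration):

```python
import numpy as np

# A dense layer computes: outputs = inputs @ weights + bias.
# This single matrix multiplication is what GPUs accelerate massively.
rng = np.random.default_rng(0)

batch = rng.standard_normal((32, 784))     # 32 input samples, 784 features each
weights = rng.standard_normal((784, 128))  # layer with 128 output units
bias = np.zeros(128)

activations = batch @ weights + bias       # one forward pass through the layer
print(activations.shape)                   # (32, 128)
```

On a GPU, the same `@` runs across thousands of cores at once, which is why training speeds up by orders of magnitude.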

Students, researchers, and engineers, not just gamers and video editors, started buying NVIDIA graphics cards for machine learning research and AI products.

After the explosive demand growth from AI startups beginning around 2020, NVIDIA started designing its graphics card products for the AI research community, not just gamers and video editors.

The company now offers three main product lines:

  • The consumer GeForce RTX GPU series, such as the RTX 4090, RTX 4080, RTX 3090, and RTX 3080, which can be used for gaming, video editing, and artificial intelligence research. These GPUs range in price from roughly $360 to $1,800.

  • Data center GPUs such as the H100, A100, and L4, specifically designed for heavy machine learning and data analytics workloads, aimed at startups and AI research teams in large corporations such as Meta, Microsoft, and OpenAI. These GPUs range in price from roughly $3,000 to $60,000.

  • AI engineering workstations, such as the newest NVIDIA DGX Spark.

All of these newest NVIDIA product lines are built on TSMC's cutting-edge 4-nanometer fabrication process, which uses extreme ultraviolet (EUV) lithography.

Before 2022, the majority of NVIDIA's revenue still came from the desktop gaming market. After 2022, driven by large GPU purchases from Meta's AI research groups, Microsoft's coding Copilot, and OpenAI's ChatGPT, NVIDIA's data center revenue surpassed its desktop gaming revenue.

NVIDIA's graphics processing units (GPUs) started out as productivity tools for a niche market of graphic designers and video editors at Hollywood movie studios. Now, through AI application servers, they could dramatically improve productivity in every industry, from agriculture and engine design to financial trading and architectural rendering.

As of 2025, NVIDIA has surpassed Apple and Microsoft to become the world's most valuable public company, with a market capitalization of more than 4 trillion dollars.

Remote Access to Cloud GPU

Access to NVIDIA L4 and A100 GPUs is quick, easy, and affordable with the Google Cloud CLI (gcloud):

# Example: Create an A100 instance
$ gcloud compute instances create my-a100 \
  --zone=us-central1-a \
  --machine-type=a2-highgpu-1g \
  --accelerator=type=nvidia-tesla-a100,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=common-cu121-debian-12 \
  --image-project=deeplearning-platform-release

# SSH into the VM
$ gcloud compute ssh my-a100 --zone=us-central1-a
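Once inside the VM, `nvidia-smi` confirms the GPU is visible. A small helper for parsing its CSV query output (a hypothetical utility, not part of any SDK; it accepts canned output so it can also be tested on machines without a GPU):

```python
import subprocess

def query_gpus(sample_output=None):
    """Return a list of (name, total_memory) tuples from nvidia-smi.

    If sample_output is given, parse it instead of calling nvidia-smi.
    """
    if sample_output is None:
        # --query-gpu with --format=csv,noheader prints one GPU per line.
        sample_output = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=name,memory.total",
             "--format=csv,noheader"],
            text=True)
    gpus = []
    for line in sample_output.strip().splitlines():
        name, memory = (field.strip() for field in line.split(","))
        gpus.append((name, memory))
    return gpus

# Parsed from canned output here; on the VM, call query_gpus() directly.
print(query_gpus("NVIDIA A100-SXM4-40GB, 40960 MiB"))
```

On the A100 instance above, this should report a single A100 with 40 GB of memory.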

Cost Approximation

  • L4: ~$0.60 – $0.75/hr
  • A100 (40 GB): ~$1.90/hr
  • A100 (80 GB): ~$3.80/hr
  • H100 (80 GB): ~$4.80 – $6.50/hr
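The hourly rates above make it easy to budget a training run. A quick sketch (rates are the approximate figures listed above, not official pricing):

```python
# Approximate on-demand rates from the list above (USD per GPU-hour).
RATES = {
    "L4": 0.70,
    "A100-40GB": 1.90,
    "A100-80GB": 3.80,
    "H100-80GB": 5.50,
}

def training_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """Estimated cost in USD for a multi-GPU training run."""
    return RATES[gpu] * num_gpus * hours

# e.g., fine-tuning on 8x A100-80GB for 24 hours:
print(f"${training_cost('A100-80GB', 8, 24):,.2f}")  # $729.60
```

For short experiments, renting cloud GPUs at these rates is usually far cheaper than buying the hardware outright.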

Self-Build a Local ML Research Machine

  • GPU: RTX 4090 (cheapest new AIB or good used unit you can verify).

  • CPU + Motherboard: Intel Core i5-13600K with a B760 motherboard; cheaper than higher-end CPUs but still fast for ML pipelines.

  • RAM: 64 GB DDR5 (2×32 GB).

  • Storage: 2TB NVMe Gen4 (e.g., WD SN850X / Samsung 990 Pro).

  • Power Supply: Corsair RM1000e / similar 1000–1200W Gold PSU.

  • Case: Mid-tower case with a 240 mm AIO liquid cooler (or a big air cooler).

  • OS: Linux (Ubuntu + CUDA/cuDNN + drivers).
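For a local build, the RTX 4090's 24 GB of VRAM is often the binding constraint. A rough rule-of-thumb check (a sketch only; real memory use also depends on activations, optimizer state, and batch size):

```python
def model_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Rough VRAM needed just to hold the weights (fp16/bf16 = 2 bytes each)."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

RTX_4090_VRAM_GB = 24

for size in (7, 13, 70):  # common open-model sizes, in billions of parameters
    needed = model_vram_gb(size)
    verdict = "fits" if needed < RTX_4090_VRAM_GB else "does not fit"
    print(f"{size}B model: ~{needed:.1f} GB of weights, {verdict} in 24 GB")
```

By this estimate a 7B model in half precision fits comfortably for inference, while 13B and larger require quantization or offloading.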

Want to receive updates on the fastest AI models, successful AI startups, and new hiring candidates? Subscribe to my newsletter.
Subscribe