ARM · IMMORTALIS GEN3
Google Tensor G3 GPU

VRAM: 12 GB (entry-level)
BANDWIDTH: 34 GB/s
MODELS @ Q4: 194/331 (59%)
7B Q4 SPEED: ~4 tok/s (slow)
▸ MODEL COVERAGE @ Q4 · 59% OF ALL
▸ ESTIMATED SPEED BY MODEL SIZE @ Q4

Average speeds at Q4 quantization. Actual performance varies by model architecture and context length.

3B: ~10 tok/s
7B: ~4 tok/s
14B: ~2 tok/s
32B: needs 18.0 GB (exceeds 12 GB VRAM)
70B: needs 39.4 GB (exceeds 12 GB VRAM)
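The per-size estimates above follow from treating decode as memory-bandwidth-bound: generating each token requires streaming the full set of Q4 weights through memory. A minimal sketch, where the ~0.64 GB-per-billion-parameters and 0.57 efficiency constants are assumptions fitted to this page's figures, not measured values:

```python
# Rough decode-speed estimate for a bandwidth-bound GPU at Q4 quantization.
BANDWIDTH_GBS = 34.0   # Tensor G3 GPU memory bandwidth (from spec table)
Q4_GB_PER_B = 0.64     # assumed GB of weights per billion params at Q4
EFFICIENCY = 0.57      # assumed fraction of peak bandwidth actually achieved

def estimate_tok_s(params_b: float) -> float:
    """Tokens/sec ~= bandwidth / bytes of weights read per token."""
    model_gb = params_b * Q4_GB_PER_B
    return BANDWIDTH_GBS / model_gb * EFFICIENCY

for size in (3, 7, 14):
    print(f"{size}B: ~{estimate_tok_s(size):.0f} tok/s")
```

With these fitted constants the sketch reproduces the page's ~10 / ~4 / ~2 tok/s figures for 3B, 7B, and 14B models.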
▸ SPECIFICATIONS
VRAM: 12 GB
BANDWIDTH: 34 GB/s
FP16 COMPUTE: 1.5 TFLOPS
TDP: 7 W
MEMORY: Shared
ARCHITECTURE: Immortalis Gen3
18 · FAST MODELS (>30 tok/s): real-time chat speed
64 · USABLE (>10 tok/s): comfortable for all tasks
194 · TOTAL COMPATIBLE: fit in VRAM at Q4
▸ DON’T WANT TO BUY?

Test the Google Tensor G3 GPU (or anything bigger) without committing. Spin up a cloud instance in ~60s, pay by the second, and cancel anytime.

Some links are affiliate links — we may earn a small commission at no extra cost to you. This helps keep FitMyLLM free and independent.

▸ COMPATIBLE MODELS · 194
S · nomic-embed-text-v1.5 100M (0.1B) · EMBEDDING · 8K CTX · CHAT · 302 tok/s · 5% VRAM
S · GPT-2 124M (0.124B) · GPT2 · 1K CTX · CHAT · 244 tok/s · 5% VRAM
S · SmolLM2 135M (0.135B) · SMOLLM · 2K CTX · CHAT · 224 tok/s · 5% VRAM
S · bge-large-en-v1.5 335M (0.335B) · EMBEDDING · 1K CTX · CHAT · 90 tok/s · 6% VRAM
S · mxbai-embed-large-v1 (0.335B) · EMBEDDING · 1K CTX · EMBEDDING · 90 tok/s · 6% VRAM
S · Snowflake Arctic Embed L (0.335B) · EMBEDDING · 1K CTX · EMBEDDING · 90 tok/s · 6% VRAM
S · GPT-2 Medium 345M (0.345B) · GPT2 · 1K CTX · CHAT · 88 tok/s · 6% VRAM
S · SmolLM2 360M (0.36B) · SMOLLM · 8K CTX · CHAT · 84 tok/s · 6% VRAM
S · Falcon-H1 0.5B (0.5B) · FALCON · 128K CTX · CHAT · 60 tok/s · 7% VRAM
S · Qwen 1.5 0.5B (0.5B) · QWEN · 32K CTX · CHAT · 60 tok/s · 7% VRAM
S · Qwen 2.5 0.5B (0.5B) · QWEN · 32K CTX · CHAT · 60 tok/s · 7% VRAM
A · BGE-M3 (0.568B) · EMBEDDING · 8K CTX · EMBEDDING · 53 tok/s · 7% VRAM
A · Qwen3 0.6B (0.6B) · QWEN · 32K CTX · CHAT · REASONING · 50 tok/s · 7% VRAM
B · GPT-2 Large 774M (0.774B) · GPT2 · 1K CTX · CHAT · 39 tok/s · 8% VRAM
B · Qwen 3.5 0.8B (0.8B) · QWEN · 256K CTX · CHAT · CODING · MULTILINGUAL · 38 tok/s · 8% VRAM
B · Qwen3.5-0.8B (0.9B) · QWEN · 256K CTX · CHAT · 34 tok/s · 9% VRAM
B · Falcon3-1B (1B) · FALCON · 32K CTX · CHAT · 30 tok/s · 9% VRAM
B · InternLM2 1B (1B) · INTERNLM · 32K CTX · CHAT · 30 tok/s · 9% VRAM
B · TinyLlama 1.1B (1.1B) · LLAMA · 2K CTX · CHAT · 27 tok/s · 10% VRAM
B · LFM2.5-1.2B-Thinking (1.2B) · LFM · 122K CTX · CHAT · REASONING · TOOL_USE · 25 tok/s · 10% VRAM
B · Llama-3.2-1B (1.2B) · LLAMA · 4K CTX · CHAT · 25 tok/s · 10% VRAM
C · DeepSeek Coder 1.3B (1.3B) · DEEPSEEK · 16K CTX · CODING · 23 tok/s · 11% VRAM
C · EXAONE-4.0-1.2B (1.3B) · EXAONE · 64K CTX · CHAT · 23 tok/s · 11% VRAM
C · OPT 1.3B (1.3B) · OPT · 2K CTX · CHAT · 23 tok/s · 11% VRAM
C · Phi-1 1.3B (1.3B) · PHI · 2K CTX · CODING · 23 tok/s · 11% VRAM
C · Phi-1.5 1.3B (1.3B) · PHI · 2K CTX · CHAT · CODING · 23 tok/s · 11% VRAM
C · granite-4.0-h-tiny 6.9B (6.9B MoE) · GRANITE · 128K CTX · CHAT · 20 tok/s · 39% VRAM
C · Falcon-H1 1.5B (1.5B) · FALCON · 128K CTX · CHAT · CODING · 20 tok/s · 12% VRAM
C · GPT-2 XL 1.5B (1.5B) · GPT2 · 1K CTX · CHAT · 20 tok/s · 12% VRAM
C · Qwen2.5-Coder-1.5B (1.5B) · QWEN · 32K CTX · CHAT · TOOL_USE · CODING · 20 tok/s · 12% VRAM
C · Qwen2 Math 1.5B (1.5B) · QWEN · 4K CTX · REASONING · 20 tok/s · 12% VRAM
C · Qwen 2.5 1.5B (1.5B) · QWEN · 32K CTX · CHAT · CODING · 20 tok/s · 12% VRAM
C · Yi Coder 1.5B (1.5B) · YI · 125K CTX · CODING · 20 tok/s · 12% VRAM
C · stablelm-2-1_6b (1.6B) · STABLELM · 4K CTX · CHAT · 19 tok/s · 12% VRAM
C · SmolLM2 1.7B (1.71B) · SMOLLM · 8K CTX · CHAT · 18 tok/s · 13% VRAM
C · Qwen3 1.7B (1.7B) · QWEN · 32K CTX · CHAT · REASONING · 18 tok/s · 13% VRAM
C · Qwen 1.5 1.8B (1.8B) · QWEN · 32K CTX · CHAT · 17 tok/s · 13% VRAM
C · Moondream2 1.9B (1.9B) · OTHER · 2K CTX · VISION · CHAT · 16 tok/s · 14% VRAM
C · Gemma 1 2B (2B) · GEMMA · 8K CTX · CHAT · 15 tok/s · 14% VRAM
C · Granite 3.0 2B (2B) · GRANITE · 128K CTX · CHAT · CODING · 15 tok/s · 14% VRAM
▸ NEXT STEP

Get personalized recommendations.

See ranked models with benchmark scores, run commands, and precise speed estimates for your Google Tensor G3 GPU.

▸ DEVICE UNDER TEST

Google Tensor G3 GPU · 12 GB VRAM.

GOOGLE TENSOR G3 GPU SPEC
BRAND: Google
VRAM: 12 GB Shared
BANDWIDTH: 34 GB/s
FP16 COMPUTE: 1.5 TFLOPS
TDP: 7 W
ARCHITECTURE: Immortalis Gen3
▸ AI CAPABILITY
194 / 331 models @ Q4

With 12 GB VRAM and 34 GB/s bandwidth, this GPU handles models up to 16.8B parameters.

Speed ≈ bandwidth / model_size × efficiency. A 7B model at Q4 runs at ~4 tok/s.
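The fit threshold and speed rule above can be sketched together. A rough Python version, assuming ~0.64 GB per billion parameters at Q4, a 0.57 effective-bandwidth factor, and 90% of VRAM usable for weights (all constants inferred from this page's numbers, not official figures):

```python
# Back-of-the-envelope "does it fit, and how fast" check at Q4.
VRAM_GB = 12.0         # Tensor G3 GPU shared memory
BANDWIDTH_GBS = 34.0   # memory bandwidth
Q4_GB_PER_B = 0.64     # assumed GB per billion params at Q4
EFFICIENCY = 0.57      # assumed effective-bandwidth fraction
USABLE_VRAM = 0.90     # assumed headroom left for KV cache / runtime

def fits(params_b: float) -> bool:
    """True if the Q4 weights fit in the usable portion of VRAM."""
    return params_b * Q4_GB_PER_B <= VRAM_GB * USABLE_VRAM

def tok_s(params_b: float) -> float:
    """Speed ~= bandwidth / model size * efficiency."""
    return BANDWIDTH_GBS / (params_b * Q4_GB_PER_B) * EFFICIENCY

print(fits(16.8))   # True: the largest listed model squeezes in
print(fits(32))     # False: a 32B model exceeds usable VRAM
print(tok_s(7))     # roughly 4 tok/s for a 7B model
```

Under these assumptions the cutoff lands at about 16.8B parameters, matching the largest model in the compatibility list.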

§ 01 · TOP MODELS FOR GOOGLE TENSOR G3 GPU
194 FIT · SHOWING 20
MODEL | SIZE | VRAM @ Q4 | TOK/S | AVG
Ling-lite 16.8B | 16.8B | 10.8 GB | 13 | n/a
DeepSeek V2 Lite 16B | 16B | 10.3 GB | 13 | 38.0
DeepSeek-Coder-V2-Lite 15.7B | 15.7B | 10.1 GB | 13 | 43.0
DeepSeek-VL2 Small 16B | 15.7B | 10.1 GB | 13 | 43.1
StarCoder 15B | 15.5B | 10.0 GB | 2 | 21.0
StarCoder2 15B | 15B | 9.7 GB | 2 | 26.5
DeepSeek R1 Distill Qwen 14B | 14.8B | 9.5 GB | 2 | 43.9
DeepCoder 14B | 14.8B | 9.5 GB | 2 | 38.7
Qwen2.5-Coder-14B | 14.8B | 9.5 GB | 2 | 41.3
Qwen2.5-14B | 14.8B | 9.5 GB | 2 | 41.3
Qwen3 14B | 14.8B | 9.5 GB | 2 | 45.7
Ministral 3 14B | 14B | 9.0 GB | 2 | 25.9
Phi-3-medium-14b | 14B | 9.0 GB | 2 | 33.7
phi-4 14B | 14B | 9.0 GB | 2 | 33.7
Phi-4-reasoning 14B | 14B | 9.0 GB | 2 | 33.7
Phi-4-multimodal 14B | 14B | 9.0 GB | 2 | 42.0
Qwen 1.5 14B | 14B | 9.0 GB | 2 | 41.3
LLaVA-1.5 13B | 13.1B | 8.5 GB | 2 | 52.1
Baichuan2 13B | 13B | 8.4 GB | 2 | 23.6
Llama 2 13B | 13B | 8.4 GB | 2 | 19.7