Your AI habits,
measured honestly
Drag the sliders to match what you typically ask the AI to do. Output type has a big impact: image generation and deep reasoning cost far more energy than simple chat.
Per-query impact,
every major model
Based on a single average query (a ~300-token input plus a ~300-token output). Click any row to jump back and use that model.
| Model | CO₂e / query | Energy / query | Water / query | Tier |
|---|---|---|---|---|
Transparent maths
Base energy (Wh/query)
Each model has a baseline Wh/query from published benchmarks or official disclosures. GPT-4o: 0.42 Wh (arXiv:2505.09598). Gemini: 0.24 Wh median (Google 2025 report). OpenAI: 0.34 Wh average (Sam Altman, June 2025). Mistral: derived from 1.14 g CO₂e/400-token LCA (Carbone 4/ADEME 2025).
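The baseline figures above amount to a simple lookup table. A minimal sketch in Python (the keys are illustrative labels, not the page's actual model identifiers; the Wh values are the published figures quoted above):

```python
# Baseline energy per average query, in Wh. Values are the published
# figures cited in the text; key names are illustrative.
BASE_WH_PER_QUERY = {
    "gpt-4o": 0.42,          # arXiv:2505.09598
    "gemini": 0.24,          # Google 2025 report, median query
    "openai-average": 0.34,  # Sam Altman, June 2025
}
```

Mistral's baseline is left out here because the text derives it from an LCA carbon figure rather than a direct Wh disclosure.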
Length multipliers
Energy scales roughly linearly with total token count. Input-length multipliers range from 0.4× (tweet, ~20 words) to 2.5× (document, ~900 words); response length adds a separate multiplier (0.4×–2.5×). The two multipliers are averaged, and the result scales the base Wh/query.
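The length scaling can be sketched as follows (bucket names like "tweet" and "document" are illustrative labels; the intermediate 1.0× buckets are assumptions, only the 0.4× and 2.5× endpoints come from the text):

```python
# Length multipliers: 0.4x for very short text, up to 2.5x for
# document-length. Middle buckets are assumed, not from the text.
INPUT_MULT = {"tweet": 0.4, "paragraph": 1.0, "document": 2.5}
RESPONSE_MULT = {"short": 0.4, "medium": 1.0, "long": 2.5}

def scaled_wh(base_wh: float, input_len: str, response_len: str) -> float:
    # The two multipliers are averaged, and the average scales
    # the model's base Wh/query.
    combined = (INPUT_MULT[input_len] + RESPONSE_MULT[response_len]) / 2
    return base_wh * combined
```

For GPT-4o's 0.42 Wh baseline, a document-length input with a long response gives 0.42 × 2.5 = 1.05 Wh.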
Output type multipliers
Conversation / Q&A: 1.0×. Writing & summarising: 1.2×. Code generation: 1.4× (typically longer outputs). Research / long documents: 1.8×. Image generation: 4.0× (0.3–1.2 Wh per image, regardless of model). Deep reasoning: 3.5× (hidden chain-of-thought token generation adds substantial cost).
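Applying the output-type multiplier on top of the length-adjusted energy might look like this (a sketch; key names and the function signature are mine, the multipliers are from the text):

```python
# Output-type multipliers from the text.
OUTPUT_TYPE_MULT = {
    "chat": 1.0,       # conversation / Q&A
    "writing": 1.2,    # writing & summarising
    "code": 1.4,       # code generation
    "research": 1.8,   # research / long docs
    "reasoning": 3.5,  # deep reasoning (hidden chain-of-thought)
    "image": 4.0,      # image generation
}

def query_wh(base_wh: float, output_type: str, length_mult: float = 1.0) -> float:
    # Base energy, adjusted for length, then scaled by output type.
    return base_wh * length_mult * OUTPUT_TYPE_MULT[output_type]
```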
Carbon & water
CO₂: 300 g CO₂e/kWh blended global grid intensity, a conservative figure that partially accounts for cloud providers' renewables (Google reached ~66% carbon-free energy in 2024). Water: 1.2 ml/Wh, based on Google's published WUE of 1.15 L/kWh plus cooling-tower evaporation losses. Sources: Google Environmental Report 2025; Epoch AI 2025.
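The two conversion factors above turn a per-query energy figure into carbon and water estimates; a minimal sketch (the function name is mine):

```python
GRID_G_CO2_PER_KWH = 300  # blended global grid intensity, g CO2e/kWh
WATER_ML_PER_WH = 1.2     # Google WUE 1.15 L/kWh plus cooling losses

def footprint(wh: float) -> tuple[float, float]:
    """Convert a query's energy (Wh) to grams of CO2e and ml of water."""
    co2_g = wh / 1000 * GRID_G_CO2_PER_KWH  # Wh -> kWh -> g CO2e
    water_ml = wh * WATER_ML_PER_WH
    return co2_g, water_ml
```

A 0.42 Wh query works out to roughly 0.13 g CO₂e and 0.5 ml of water.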