Show HN: Cloud-cost-CLI – Find cloud $$ waste in AWS, Azure and GCP (github.com/vuhp)
4 points by vuhp 1 day ago | 3 comments
Hey HN! I built a CLI tool to find cost-saving opportunities in AWS, Azure, and GCP.

Why? Existing cost management tools are either expensive SaaS products or slow dashboards buried in cloud consoles. I wanted something fast, CLI-first, and multi-cloud that I could run in CI/CD or my terminal.

What it does:

- Scans your cloud accounts and finds idle VMs, unattached volumes, oversized databases, and unused resources
- Returns a ranked list of opportunities with estimated monthly savings
- 26 analyzers across AWS, Azure, and GCP
- Read-only (never modifies infrastructure)
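
For a rough sense of what an analyzer does under the hood, here's a minimal sketch of one read-only check (my own illustration using the AWS SDK v3, not the tool's actual code; the pricing constant is a placeholder):

    import { EC2Client, DescribeVolumesCommand } from "@aws-sdk/client-ec2";

    // Placeholder rate; real EBS pricing varies by region and volume type.
    const GP3_PRICE_PER_GB_MONTH = 0.08;

    // Find EBS volumes in the "available" state, i.e. not attached to any instance.
    async function findUnattachedVolumes(region: string) {
      const client = new EC2Client({ region });
      const { Volumes = [] } = await client.send(
        new DescribeVolumesCommand({
          Filters: [{ Name: "status", Values: ["available"] }],
        })
      );
      return Volumes.map((v) => ({
        volumeId: v.VolumeId,
        sizeGiB: v.Size ?? 0,
        estMonthlySavingsUsd: (v.Size ?? 0) * GP3_PRICE_PER_GB_MONTH,
      }));
    }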

Key features:

• HTML reports with interactive charts (new in v0.6.2)
• AI-powered explanations (OpenAI or local Ollama)
• Export formats: HTML, Excel, CSV, JSON, terminal
• Multi-cloud: AWS, Azure, and GCP support (26 analyzers)

Quick example:

    npm install -g cloud-cost-cli
    cloud-cost-cli scan --provider aws --output html

Real impact: One scan found $11k/year in savings (empty App Service Plan, over-provisioned CosmosDB, idle caches).

Technical stack:

- TypeScript
- AWS/Azure/GCP SDKs
- Commander.js for the CLI
- Chart.js for HTML reports
- Optional OpenAI/Ollama integration

Open source (MIT): https://github.com/vuhp/cloud-cost-cli
npm: cloud-cost-cli

Would love feedback on:

1. What features would be most useful?
2. Should I add historical tracking (trends)?
3. Any missing cloud providers?

Happy to answer questions!



Historical tracking is useful, but only if it’s dead simple. If it requires standing up a DB or service, I probably won’t use it.

I've been trying to track down cloud waste recently after discovering that EC2 had been storing 10GB snapshots every 12 hours for the past 8 months without me noticing. Obviously not a crazy $20k bill, but still annoying -- especially at small scale.


Thanks! I've actually already built Option 1 (JSON files + a compare command), just haven't published it yet -- it will be in the next release.
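
Roughly, the compare step is just diffing two saved scan outputs. A simplified sketch of the idea (the schema here is illustrative, not the final JSON format):

    // Hypothetical shape of a saved scan; the real file format may differ.
    interface ScanResult {
      timestamp: string;
      findings: { id: string; resource: string; estMonthlySavings: number }[];
    }

    // What's new, what's resolved, and how the total estimated savings changed.
    function compareScans(prev: ScanResult, curr: ScanResult) {
      const prevIds = new Set(prev.findings.map((f) => f.id));
      const currIds = new Set(curr.findings.map((f) => f.id));
      const total = (r: ScanResult) =>
        r.findings.reduce((sum, f) => sum + f.estMonthlySavings, 0);
      return {
        newFindings: curr.findings.filter((f) => !prevIds.has(f.id)),
        resolved: prev.findings.filter((f) => !currIds.has(f.id)),
        savingsDelta: total(curr) - total(prev),
      };
    }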

But your comment made me think more about Option 2: exporting metrics to existing monitoring stacks (OTel/Prometheus). For teams already using them, that might actually be "dead simple", since there's no new tool to set up or learn.
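
Something like this, for example (a rough sketch assuming prom-client; the metric and label names are made up, not an existing integration):

    import { Gauge, Registry } from "prom-client";

    const registry = new Registry();

    // One gauge, labeled by provider and analyzer, set from the latest scan.
    const savingsGauge = new Gauge({
      name: "cloud_cost_estimated_monthly_savings_usd",
      help: "Estimated monthly savings per analyzer from the latest scan",
      labelNames: ["provider", "analyzer"],
      registers: [registry],
    });

    // After a scan, set the gauge from the results, e.g.:
    savingsGauge.set({ provider: "aws", analyzer: "unattached_volumes" }, 123.45);

    // Then expose registry.metrics() on a /metrics endpoint, or push via an OTel collector.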

I would appreciate any suggestions.


I'm not sure how AWS logs work since I'm just used to looking at Cost Explorer... But I would assume they actually store the history themselves, and you can just pull it.