
Ollama Benchmark - Compare LLMs Locally

CRX ID
nodepdbjokbfbmjcknjhpdciphegjicd
Description from extension meta

Benchmark and compare LLMs running on Ollama. Analyze token performance, export results, and optimize your AI workflows.

Description from store

๐Ÿ” Benchmark and compare performance across LLMs (Large Language Models) like Mistral, LLaMA, Qwen, and others โ€“ powered by Ollama. This Chrome Extension allows you to test multiple models simultaneously and export detailed performance results.

🧠 Features:
- 📌 Select one or multiple models to compare
- 🧪 Run prompt-based benchmark tests
- 📊 Analyze token count, response time, and speed (tokens/s)
- 💾 Export results in `.txt`, `.csv`, or `.json` format (see the export sketch after this list)
- 🗂 Local storage of settings and results
- 🌐 Works with both local and remote Ollama APIs
- 🌍 Multilingual interface (English & Turkish)

☕ Like the tool? Support the developer:
https://www.buymeacoffee.com/elroy

๐Ÿ” No data is collected. All processing happens in your browser.
๐Ÿ“ฆ 100% free to use.