LLM Copy Buttons + RAG Library
Extracts main content and prepares Markdown/Plain text for LLMs on demand.
LLM Copy Buttons helps turn web pages into clean, useful content for AI models.
What problem does the extension solve?
When an LLM reads a website, it sees a lot of noise: menus, ads, buttons, footers, and other clutter.
As a result, the model loses context, gets distracted by irrelevant blocks, and may answer inaccurately.
The extension lets you extract clean, structured information from any page in one click. Clean input means better output.
What's included in the free version:
Copy for LLM
Copies a page in an AI-friendly format: title, URL, key metadata, and clean text.
View Markdown
Shows a structured version of the text: headings, lists, code blocks, and tables.
View Plain Text
Shows plain text without formatting.
Open in ChatGPT / Claude / Perplexity
Quickly opens the selected service in a new tab and passes the extracted page content into the conversation as context.
Save to Library
Saves the page to your local library.
Library
A convenient list of saved materials: search, select, delete, export.
What the Pro version adds:
Full RAG Library
Save articles, documentation, READMEs, and other sources in Library, then work with them as one unified knowledge set.
Bulk extraction from a URL list
Paste a list of links into Extract URL list, and the extension processes them one by one.
This is useful for quickly building a dataset.
Export a ready-to-use RAG package (ZIP)
Pro lets you export your library without limits in a single archive.
Inside, you get everything needed for downstream processing:
- markdown/ – structured page texts (.md)
- plain/ – plain text (.txt)
- meta/ – source metadata (.json)
- pages.jsonl – pages in a pipeline-ready format
- chunks.jsonl – vectorization-ready segments
- manifest.json – export package description
Smart Chunking
For RAG, it is important not only to split text but also to preserve meaning across chunk boundaries.
Library settings include:
- chunk size (in characters)
- overlap (characters shared between adjacent chunks)
This helps produce more stable retrieval and more accurate LLM answers.
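The two settings interact like this. The sketch below shows one common way to slide a fixed-size window with overlap; the extension's actual chunking algorithm is not documented here, so treat this as an assumption-laden illustration, not its implementation.

```python
def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into fixed-size character chunks, where each chunk
    repeats the last `overlap` characters of the previous one so that
    meaning spanning a boundary survives in at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final window already reached the end of the text
    return chunks
```

With `chunk_size=4` and `overlap=2`, the string `"abcdefghij"` becomes `["abcd", "cdef", "efgh", "ghij"]`: each chunk shares two characters with its neighbor, which is what stabilizes retrieval at the boundaries.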
Who Pro is for
- people building an internal knowledge base
- teams preparing content for AI assistants
- users who need regular export of large datasets
- developers building RAG solutions
Privacy and security
- runs only on explicit user actions
- no hidden background tracking
- no page content sent to third-party servers
- minimal browser permissions