AI Provider Configuration
Redline supports multiple AI providers for the research assistant. Configure your preferred provider in Settings > AI Provider.
Supported Providers
| Provider | Vision | Tools | Local | Notes |
|---|---|---|---|---|
| Redline AI | Yes | Yes | No | Included with license, no API key needed |
| Claude (Anthropic) | Yes | Yes | No | Best overall with own API key |
| OpenAI | Yes | Yes | No | Strong alternative |
| OpenRouter | Varies | Varies | No | Access many models |
| Ollama | Varies | Varies | Yes | Fully local, no API key |
Redline AI
Redline AI is the managed AI service included with your Redline license. It provides access to powerful AI models without needing to manage your own API keys.
Features
- No API key required - Works automatically with your license
- Included usage budget - Monthly AI credits based on your tier
- Curated models - Access to top-performing models for research
- Zero configuration - Just select Redline AI and start working
Setup
- Go to Settings > AI Provider
- Select Redline AI
- That's it! Your license key handles authentication automatically.
Available Models
| Model | Best For |
|---|---|
| Claude Sonnet 4 | General research, best for tool use |
| Claude 3.5 Haiku | Fast responses, cost-effective |
| Kimi K2.5 | Alternative for variety |
Usage Budget
Your monthly AI budget depends on your license tier:
| Tier | Monthly Budget |
|---|---|
| Personal | $2 |
| Pro | $8 |
| Enterprise | $8 per seat |
Your budget resets monthly, anchored to your license activation date. When the budget is exhausted, AI features pause until the next reset.
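The reset rule can be sketched as a small date calculation. This is an illustrative sketch, not Redline's actual billing logic; in particular, the clamping behavior for month-end activation dates is an assumption.

```python
from datetime import date
import calendar

def next_reset(activation: date, today: date) -> date:
    """Next monthly budget reset, anchored to the activation day.

    Assumes resets clamp to the last day of shorter months
    (e.g. activated on the 31st -> resets on Feb 28/29).
    """
    year, month = today.year, today.month
    # Candidate reset date in the current month.
    day = min(activation.day, calendar.monthrange(year, month)[1])
    candidate = date(year, month, day)
    if candidate > today:
        return candidate
    # Already passed this month; roll over to the next month.
    year, month = (year + 1, 1) if month == 12 else (year, month + 1)
    day = min(activation.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)
```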
Checking Usage
- In-app: Settings > AI Provider shows remaining budget
- Enterprise: Use the Usage Dashboard for detailed tracking
Stretching Your Budget
- Use Claude 3.5 Haiku for routine tasks (cheaper per token)
- Use Claude Sonnet 4 for complex research
- Limit context by selecting fewer nodes when chatting
When to Use Your Own API Key
You might prefer your own API key if you:
- Need unlimited usage beyond your budget
- Want access to specific models not in Redline AI
- Have organizational API key requirements
- Prefer to run models locally (Ollama)
You can switch between Redline AI and other providers anytime in Settings.
Claude (Anthropic)
Claude is the recommended provider for Redline. It offers excellent research capabilities and tool use.
Authentication Options
1. OAuth Login (Easiest)
   - Go to Settings > AI Provider
   - Select "Claude"
   - Click "Sign in with Claude"
   - Authorize Redline in your browser
   - You're connected!
2. API Key
   - Get an API key from console.anthropic.com
   - Go to Settings > AI Provider
   - Select "Claude"
   - Enter your API key
   - Click "Save"
Recommended Models
| Model | Best For |
|---|---|
| claude-sonnet-4-20250514 | General use, best balance |
| claude-3-5-haiku-20241022 | Fast, cost-effective |
Token Limits
| Model | Context Window |
|---|---|
| Claude Sonnet 4 | 200K tokens |
| Claude 3.5 Haiku | 200K tokens |
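A common rule of thumb is that English text runs about 4 characters per token, which lets you sanity-check whether a document fits the 200K window before sending it. The helper below is an illustrative sketch (not part of Redline), and the heuristic is only approximate:

```python
def fits_in_context(text: str, context_window: int = 200_000,
                    reserve_for_output: int = 4_096) -> bool:
    """Rough fit check using the ~4 characters/token heuristic for
    English text. Real counts vary by model, language, and content;
    use the provider's tokenizer when you need exact numbers."""
    estimated_tokens = len(text) / 4
    return estimated_tokens <= context_window - reserve_for_output
```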
OpenAI
OpenAI provides GPT-4 and other models with strong capabilities.
Setup
- Get an API key from platform.openai.com
- Go to Settings > AI Provider
- Select "OpenAI"
- Enter your API key
- Click "Save"
Recommended Models
| Model | Best For |
|---|---|
| gpt-4-turbo | Best capability |
| gpt-4o | Fast, vision support |
| gpt-3.5-turbo | Cost-effective |
OpenRouter
OpenRouter provides access to many models through a single API, including models from Anthropic, OpenAI, Meta, Google, and more.
Setup
- Get an API key from openrouter.ai
- Go to Settings > AI Provider
- Select "OpenRouter"
- Enter your API key
- Select a model from the list
- Click "Save"
Available Models
OpenRouter provides access to 100+ models. Some popular options:
| Model | Provider | Notes |
|---|---|---|
| claude-3-5-sonnet | Anthropic | Via OpenRouter |
| gpt-4-turbo | OpenAI | Via OpenRouter |
| llama-3.1-70b | Meta | Open weights |
| gemini-pro | Google | Via OpenRouter |
Pricing
OpenRouter charges per token based on the model used. Check openrouter.ai/models for current pricing.
Ollama (Local)
Ollama runs AI models locally on your machine. No API key or internet required for AI inference.
Setup
- Install Ollama from ollama.ai
- Pull a model: `ollama pull llama3.1`
- Start Ollama (it runs as a background service)
- Go to Settings > AI Provider
- Select "Ollama"
- Select your model
- Click "Save"
Recommended Models
| Model | Size | Notes |
|---|---|---|
| llama3.1:8b | ~5GB | Good for most machines |
| llama3.1:70b | ~40GB | Best quality, needs GPU |
| mistral:7b | ~4GB | Fast, efficient |
| mixtral:8x7b | ~26GB | Strong performance |
Hardware Requirements
| Model Size | RAM | GPU VRAM |
|---|---|---|
| 7B | 8GB+ | 6GB+ |
| 13B | 16GB+ | 10GB+ |
| 70B | 64GB+ | 40GB+ |
For best Ollama performance, use a GPU with sufficient VRAM. CPU inference is slower but works.
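The sizes above follow a back-of-envelope formula: weights take (parameters × bits per weight) / 8 bytes, plus runtime overhead for the KV cache and buffers. The function below is a rough sketch; the 4-bit default (typical Q4 quantization) and the 20% overhead factor are assumptions, not Ollama specifics.

```python
def estimate_model_gb(params_billions: float, bits_per_weight: int = 4,
                      overhead: float = 1.2) -> float:
    """Back-of-envelope memory estimate for a quantized local model.

    bits_per_weight: 4 for typical Q4 quantization, 16 for fp16.
    overhead: extra factor (~20%) for KV cache and runtime buffers.
    """
    weight_gb = params_billions * bits_per_weight / 8
    return round(weight_gb * overhead, 1)

# Lines up reasonably with the table above:
# llama3.1:8b at 4-bit -> ~4.8 GB, llama3.1:70b -> ~42 GB
```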
Custom Ollama URL
If Ollama runs on a different machine or port:
- Go to Settings > AI Provider > Ollama
- Enter the custom URL (e.g., `http://192.168.1.100:11434`)
- Save and verify connection
Provider Comparison
Capability Matrix
| Feature | Redline AI | Claude | OpenAI | OpenRouter | Ollama |
|---|---|---|---|---|---|
| Tool Use | Yes | Yes | Yes | Model-dependent | Model-dependent |
| Vision | Yes | Yes | Yes | Model-dependent | Some models |
| Long Context | Yes | Yes | Yes | Model-dependent | Limited |
| Streaming | Yes | Yes | Yes | Yes | Yes |
| Offline | No | No | No | No | Yes |
| API Key Required | No | Yes | Yes | Yes | No |
Cost Comparison
| Provider | Approximate Cost |
|---|---|
| Redline AI | Included with license ($2–$8/month budget) |
| Claude Sonnet | $3/$15 per 1M tokens (in/out) |
| GPT-4 Turbo | $10/$30 per 1M tokens |
| OpenRouter | Varies by model |
| Ollama | Free (local compute) |
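Per-token rates translate into per-request costs like this. A minimal sketch using the rates from the table above (check the providers' current pricing pages before relying on them):

```python
# USD per 1M tokens (input, output), per the comparison table above.
RATES = {
    "claude-sonnet": (3.00, 15.00),
    "gpt-4-turbo": (10.00, 30.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the listed per-million-token rates."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000
```

For example, a research chat that sends 10K tokens of context and gets a 2K-token answer costs about $0.06 on Claude Sonnet, so a $2 Personal budget covers roughly 30 such requests.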
Recommendations
- Easiest setup: Redline AI (included, no configuration)
- Unlimited usage: Claude or OpenAI (with your API key)
- Privacy-focused: Ollama (fully local)
- Model variety: OpenRouter
- OpenAI ecosystem: OpenAI
Redline AI for Enterprise
Enterprise licenses get enhanced Redline AI features:
- Per-seat budgets - Each seat gets $8/month
- Usage dashboard - Track usage across all seats at getredline.io/usage
- Centralized management - Monitor team usage from one place
Enterprise users can still configure their own API keys if they prefer to bypass Redline AI.
See Enterprise Usage Dashboard for more details.
Troubleshooting
"Invalid API Key"
- Verify the key is correct (no extra spaces)
- Check the key hasn't expired
- Ensure you have API access enabled on your account
"Model not available"
- The model may be deprecated
- Check you have access to that model tier
- Try a different model
"Rate limit exceeded"
- Wait a few minutes and retry
- Consider upgrading your API tier
- Reduce request frequency
Ollama not connecting
- Verify Ollama is running: `ollama list`
- Check the URL is correct (default: `http://localhost:11434`)
- Ensure no firewall is blocking the connection
Slow responses
- Try a smaller/faster model
- Check your internet connection
- For Ollama, verify GPU is being used