AI Provider Configuration

Redline supports multiple AI providers for the research assistant. Configure your preferred provider in Settings > AI Provider.

Supported Providers

Provider           | Vision | Tools  | Local | Notes
Redline AI         | Yes    | Yes    | No    | Included with license, no API key needed
Claude (Anthropic) | Yes    | Yes    | No    | Best overall with your own API key
OpenAI             | Yes    | Yes    | No    | Strong alternative
OpenRouter         | Varies | Varies | No    | Access to many models
Ollama             | Varies | Varies | Yes   | Fully local, no API key

Redline AI

Redline AI is the managed AI service included with your Redline license. It provides access to powerful AI models without needing to manage your own API keys.

Features

  • No API key required - Works automatically with your license
  • Included usage budget - Monthly AI credits based on your tier
  • Curated models - Access to top-performing models for research
  • Zero configuration - Just select Redline AI and start working

Setup

  1. Go to Settings > AI Provider
  2. Select Redline AI
  3. That's it! Your license key handles authentication automatically.

Available Models

Model            | Best For
Claude Sonnet 4  | General research, best for tool use
Claude 3.5 Haiku | Fast responses, cost-effective
Kimi K2.5        | Alternative for variety

Usage Budget

Your monthly AI budget depends on your license tier:

Tier       | Monthly Budget
Personal   | $2
Pro        | $8
Enterprise | $8 per seat

Budgets reset monthly, based on your license activation date. When your budget is exhausted, AI features pause until the next reset.

Checking Usage

  • In-app: Settings > AI Provider shows remaining budget
  • Enterprise: Use the Usage Dashboard for detailed tracking

Budget Tips

  • Use Claude 3.5 Haiku for routine tasks (cheaper per token)
  • Use Claude Sonnet 4 for complex research
  • Limit context by selecting fewer nodes when chatting

When to Use Your Own API Key

You might prefer your own API key if you:

  • Need unlimited usage beyond your budget
  • Want access to specific models not in Redline AI
  • Have organizational API key requirements
  • Prefer to run models locally (Ollama)

You can switch between Redline AI and other providers anytime in Settings.


Claude (Anthropic)

Claude is the recommended provider for Redline. It offers excellent research capabilities and tool use.

Authentication Options

1. OAuth Login (Easiest)

  1. Go to Settings > AI Provider
  2. Select "Claude"
  3. Click "Sign in with Claude"
  4. Authorize Redline in your browser
  5. You're connected!

2. API Key

  1. Get an API key from console.anthropic.com
  2. Go to Settings > AI Provider
  3. Select "Claude"
  4. Enter your API key
  5. Click "Save"
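
If you want to verify a key independently of Redline, a minimal curl request against Anthropic's Messages API will confirm it. This is a sketch only: it assumes curl is installed and that you have exported the key as the ANTHROPIC_API_KEY environment variable.

```shell
# Minimal Messages API request to confirm an Anthropic API key works.
# Assumes the key is exported as ANTHROPIC_API_KEY.
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-haiku-20241022",
    "max_tokens": 16,
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

A JSON response with a content block means the key is valid; a 401 authentication_error means it is not.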

Available Models

Model                     | Best For
claude-sonnet-4-20250514  | General use, best balance
claude-3-5-haiku-20241022 | Fast, cost-effective

Token Limits

Model            | Context Window
Claude Sonnet 4  | 200K tokens
Claude 3.5 Haiku | 200K tokens

OpenAI

OpenAI provides GPT-4 and other models with strong capabilities.

Setup

  1. Get an API key from platform.openai.com
  2. Go to Settings > AI Provider
  3. Select "OpenAI"
  4. Enter your API key
  5. Click "Save"
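
To confirm an OpenAI key works before entering it, you can list the models it has access to. A sketch, assuming curl is installed and the key is exported as OPENAI_API_KEY:

```shell
# Lists the models your key can access.
# A 401 response means the key is invalid or lacks API access.
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```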

Available Models

Model         | Best For
gpt-4-turbo   | Best capability
gpt-4o        | Fast, vision support
gpt-3.5-turbo | Cost-effective

OpenRouter

OpenRouter provides access to many models through a single API, including models from Anthropic, OpenAI, Meta, Google, and more.

Setup

  1. Get an API key from openrouter.ai
  2. Go to Settings > AI Provider
  3. Select "OpenRouter"
  4. Enter your API key
  5. Select a model from the list
  6. Click "Save"
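
OpenRouter exposes an OpenAI-compatible API, so you can sanity-check a key with a minimal chat request. A sketch, assuming curl, the key exported as OPENROUTER_API_KEY, and a model slug you have access to (the one below is only an example):

```shell
# Minimal chat completion via OpenRouter's OpenAI-compatible endpoint.
# Swap the model slug for one you have access to.
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/llama-3.1-70b-instruct",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```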

Available Models

OpenRouter provides access to 100+ models. Some popular options:

Model             | Provider  | Notes
claude-3-5-sonnet | Anthropic | Via OpenRouter
gpt-4-turbo       | OpenAI    | Via OpenRouter
llama-3.1-70b     | Meta      | Open weights
gemini-pro        | Google    | Google's model

Pricing

OpenRouter charges per token based on the model used. Check openrouter.ai/models for current pricing.


Ollama (Local)

Ollama runs AI models locally on your machine. No API key or internet required for AI inference.

Setup

  1. Install Ollama from ollama.ai
  2. Pull a model:
    ollama pull llama3.1
  3. Start Ollama (it runs as a background service)
  4. Go to Settings > AI Provider
  5. Select "Ollama"
  6. Select your model
  7. Click "Save"
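
Before pointing Redline at Ollama, it can help to confirm the model runs from the command line:

```shell
# Show installed models, then run a one-off prompt against llama3.1.
ollama list
ollama run llama3.1 "Reply with one word."
```

If both commands succeed, Redline should be able to connect with the default settings.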

Recommended Models

Model        | Size  | Notes
llama3.1:8b  | ~5GB  | Good for most machines
llama3.1:70b | ~40GB | Best quality, needs GPU
mistral:7b   | ~4GB  | Fast, efficient
mixtral:8x7b | ~26GB | Strong performance

Hardware Requirements

Model Size | RAM   | GPU VRAM
7B         | 8GB+  | 6GB+
13B        | 16GB+ | 10GB+
70B        | 64GB+ | 40GB+

Performance

For best Ollama performance, use a GPU with sufficient VRAM. CPU inference is slower but works.

Custom Ollama URL

If Ollama runs on a different machine or port:

  1. Go to Settings > AI Provider > Ollama
  2. Enter the custom URL (e.g., http://192.168.1.100:11434)
  3. Save and verify connection
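
You can check that the remote instance is reachable before saving. A sketch, assuming curl; substitute your Ollama machine's address:

```shell
# Ollama's /api/tags endpoint returns the models installed on that instance.
curl http://192.168.1.100:11434/api/tags
```

If this returns a JSON list of models, the URL is correct and Redline should connect.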

Provider Comparison

Capability Matrix

Feature          | Redline AI | Claude | OpenAI | OpenRouter      | Ollama
Tool Use         | Yes        | Yes    | Yes    | Model-dependent | Model-dependent
Vision           | Yes        | Yes    | Yes    | Model-dependent | Some models
Long Context     | Yes        | Yes    | Yes    | Model-dependent | Limited
Streaming        | Yes        | Yes    | Yes    | Yes             | Yes
Offline          | No         | No     | No     | No              | Yes
API Key Required | No         | Yes    | Yes    | Yes             | No

Cost Comparison

Provider      | Approximate Cost
Redline AI    | Included with license ($2-8/month)
Claude Sonnet | $3 / $15 per 1M tokens (input/output)
GPT-4 Turbo   | $10 / $30 per 1M tokens (input/output)
OpenRouter    | Varies by model
Ollama        | Free (local compute)

Recommendations

  • Easiest setup: Redline AI (included, no configuration)
  • Unlimited usage: Claude or OpenAI (with your API key)
  • Privacy-focused: Ollama (fully local)
  • Model variety: OpenRouter
  • OpenAI ecosystem: OpenAI

Redline AI for Enterprise

Enterprise licenses get enhanced Redline AI features:

  • Per-seat budgets - Each seat gets $8/month
  • Usage dashboard - Track usage across all seats at getredline.io/usage
  • Centralized management - Monitor team usage from one place

Enterprise users can still configure their own API keys if they prefer to bypass Redline AI.

See Enterprise Usage Dashboard for more details.


Troubleshooting

"Invalid API Key"

  • Verify the key is correct (no extra spaces)
  • Check the key hasn't expired
  • Ensure you have API access enabled on your account

"Model not available"

  • The model may be deprecated
  • Check you have access to that model tier
  • Try a different model

"Rate limit exceeded"

  • Wait a few minutes and retry
  • Consider upgrading your API tier
  • Reduce request frequency

Ollama not connecting

  • Verify Ollama is running: ollama list
  • Check the URL is correct (default: http://localhost:11434)
  • Ensure no firewall is blocking the connection

Slow responses

  • Try a smaller/faster model
  • Check your internet connection
  • For Ollama, verify GPU is being used