Usage Dashboard

ENTERPRISE

The Usage Dashboard provides visibility into AI usage across all your Enterprise seats.

Dashboard URL: getredline.io/usage

Accessing the Dashboard

  1. Navigate to getredline.io/usage
  2. Enter your Enterprise license key (master key or seat key)
  3. Click View Usage

Dashboard Overview

The Usage Dashboard displays:

Header Information

  • License key (partially masked)
  • License tier (Enterprise)
  • Time period selector (7, 30, or 90 days)

Summary Statistics

Metric          Description
Total Tokens    Combined input + output tokens
Input Tokens    Tokens sent to the AI (prompts, context)
Output Tokens   Tokens received from the AI (responses)
Requests        Number of API requests made

Daily Usage Chart

A bar chart showing token usage over time:

  • X-axis: Date
  • Y-axis: Token count
  • Bars: Split by input (prompt) and output (completion) tokens

Hover over bars to see exact values for each day.

Usage by Seat

A table breaking down usage per seat:

Column          Description
Seat            Seat number
User            Email/identifier (if known)
Input Tokens    Prompt tokens used
Output Tokens   Completion tokens used
Total Tokens    Combined token count
Requests        Number of API requests
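
The relationships between these columns are simple: each seat's Total Tokens is its Input Tokens plus Output Tokens, and the summary statistics are the sums across all seats. A minimal sketch of that arithmetic in Python (the SeatUsage structure and field names are illustrative, not an official Redline data format):

```python
from dataclasses import dataclass

@dataclass
class SeatUsage:
    # Illustrative structure mirroring the Usage by Seat columns;
    # not an official Redline data format.
    seat: int
    user: str
    input_tokens: int
    output_tokens: int
    requests: int

    @property
    def total_tokens(self) -> int:
        # Total Tokens = Input Tokens + Output Tokens
        return self.input_tokens + self.output_tokens

def summarize(seats: list[SeatUsage]) -> dict[str, int]:
    """Recreate the dashboard's summary statistics from per-seat rows."""
    return {
        "input_tokens": sum(s.input_tokens for s in seats),
        "output_tokens": sum(s.output_tokens for s in seats),
        "total_tokens": sum(s.total_tokens for s in seats),
        "requests": sum(s.requests for s in seats),
    }
```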

Understanding Token Usage

What are Tokens?

Tokens are the units AI models use to process text. Roughly:

  • 1 token = ~4 characters in English
  • 100 tokens = ~75 words
  • 1,000 tokens = ~750 words
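
For a quick back-of-the-envelope estimate, you can convert text length to an approximate token count using these ratios. A small sketch (real counts depend on the model's tokenizer, so treat this as a rough approximation only):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # The model's actual tokenizer will give somewhat different counts.
    return max(1, len(text) // 4)

print(estimate_tokens("Summarize the selected nodes in two sentences."))  # ~11 tokens
```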

Input vs Output Tokens

  • Input (Prompt) - What you send to the AI: your message, selected nodes, conversation history (lower cost)
  • Output (Completion) - What the AI responds with: answers, created content (higher cost)

What Consumes Tokens?

  • Chat messages - Your prompts and AI responses
  • Context injection - Selected nodes added to prompts
  • Tool usage - AI using research tools
  • Summaries - AI-generated summaries for nodes and narratives
  • RSS filtering - AI evaluating article relevance

Time Period Selection

View usage for different periods:

Period         Use Case
Last 7 days    Recent activity, spot unusual patterns
Last 30 days   Monthly usage, align with billing
Last 90 days   Long-term trends, capacity planning

AI Budget

Enterprise seats have a monthly AI budget:

Parameter         Value
Budget per seat   $8/month
Reset date        Monthly from activation
Currency          USD (based on model pricing)

Understanding Budget Consumption

The budget translates to tokens based on which AI models are used. Approximate tokens per $1:

Model              Input Tokens/$   Output Tokens/$
Claude Sonnet 4    ~333K            ~66K
Claude 3.5 Haiku   ~1M              ~200K
GPT-4 Turbo        ~100K            ~33K

Cheaper models (like Haiku) stretch your budget further.
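
To see what the $8 monthly budget means in practice, you can convert it to an approximate token allowance for a given model. A sketch using the rates above (pricing is approximate and can change, and the 50/50 input/output split is just an assumption for illustration):

```python
# Approximate tokens per $1, taken from the table above.
RATES = {
    "claude-sonnet-4":  {"input_per_usd": 333_000,   "output_per_usd": 66_000},
    "claude-3.5-haiku": {"input_per_usd": 1_000_000, "output_per_usd": 200_000},
}

def monthly_allowance(model: str, budget_usd: float = 8.0,
                      input_share: float = 0.5) -> dict[str, int]:
    """Split the budget between input and output spend and convert to tokens."""
    r = RATES[model]
    return {
        "input_tokens": int(budget_usd * input_share * r["input_per_usd"]),
        "output_tokens": int(budget_usd * (1 - input_share) * r["output_per_usd"]),
    }

print(monthly_allowance("claude-sonnet-4"))
# {'input_tokens': 1332000, 'output_tokens': 264000}
```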

Budget Warnings

  • 80% usage - Users see a warning in Redline
  • 100% usage - AI features pause until next month
  • Reset - Budget resets monthly from license activation
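
The thresholds are easy to reason about as a fraction of the monthly budget. A purely illustrative sketch of the logic (the real checks happen on Redline's side, not in your code):

```python
def budget_status(spent_usd: float, budget_usd: float = 8.0) -> str:
    # Mirrors the documented thresholds: warn at 80% of the monthly
    # budget, pause AI features at 100%.
    used = spent_usd / budget_usd
    if used >= 1.0:
        return "paused"   # AI features pause until the next month
    if used >= 0.8:
        return "warning"  # users see a warning in Redline
    return "ok"

print(budget_status(6.50))  # 'warning' (about 81% of the $8 budget)
```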

Monitoring Best Practices

Weekly Check-ins

Review usage weekly to:

  • Identify power users who may need guidance
  • Spot unusual patterns (potential abuse)
  • Plan for capacity needs

Monthly Reports

At month end:

  • Compare against budget
  • Identify seats with low usage (might not need all seats)
  • Track trends over time
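
If you copy the per-seat totals out of the Usage by Seat table, a few lines of scripting can flag seats worth a closer look. A sketch with made-up numbers and arbitrary cutoffs (no export API is implied here; the values are entered by hand):

```python
# (seat, total_tokens) pairs copied by hand from the Usage by Seat table.
seat_totals = [(1, 950_000), (2, 40_000), (3, 0), (4, 310_000)]

LOW_USAGE = 50_000    # arbitrary cutoff for "barely used this month"
HIGH_USAGE = 800_000  # arbitrary cutoff for "worth a check-in"

for seat, total in seat_totals:
    if total == 0:
        print(f"Seat {seat}: no usage (not activated, local AI, or own API keys?)")
    elif total < LOW_USAGE:
        print(f"Seat {seat}: low usage ({total:,} tokens) - may not need this seat")
    elif total > HIGH_USAGE:
        print(f"Seat {seat}: heavy usage ({total:,} tokens) - worth a check-in")
```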

Set Expectations

Communicate with team members:

  • AI budget per seat
  • What counts as usage
  • When budget resets

Frequently Asked Questions

Is usage real-time?

Usage data may be delayed by up to 5 minutes. The dashboard shows approximate values.

Can I see individual requests?

No. The dashboard shows aggregate usage only. Individual prompts and responses are not visible for privacy.

What happens when budget is exceeded?

AI features pause for that seat until the next billing period. Core app functionality (boards, nodes, connections) continues to work.

Can I get more budget?

Contact sales@getredline.io for custom budget arrangements.

Why do some seats show zero usage?

The seat may:

  • Not be activated yet
  • Be using local AI (Ollama) instead of cloud AI
  • Have its own API keys configured (bypassing managed AI)
  • Simply not have used AI features in the selected period

How do I reduce usage?

Tips for efficient AI usage:

  • Use shorter, focused prompts
  • Select fewer nodes when chatting
  • Use faster/cheaper models when possible
  • Avoid unnecessary regeneration of summaries
  • Limit RSS feed polling frequency