Auto-Enrichment
Redline automatically enriches your nodes with additional data from external sources. This happens in the background so you can keep working while nodes are enhanced.
How It Works
When you create certain node types, Redline queues enrichment jobs that run in the background:
- You create a node (or the AI creates one)
- Enrichment jobs are queued based on node type
- Jobs process in the background (3 at a time, with a short delay between each; see Enrichment Queue below)
- Node updates automatically with new data
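Under the hood, this amounts to mapping node types to jobs and pushing them onto a queue. The sketch below is illustrative, not Redline's actual code; the job type names and priorities follow the Job Priorities table later on this page:

```ts
// Illustrative enqueue step; not Redline's actual internals. Job type
// names and priorities follow the Job Priorities table below.
type JobType =
  | "actor_profile"
  | "org_profile"
  | "newsfeed_scrape"
  | "media_archive"
  | "social_link_enrich";

interface EnrichmentJob {
  nodeId: string;
  type: JobType;
  priority: number; // 1-10, lower runs first
}

const queue: EnrichmentJob[] = [];

// Which jobs (and priorities) each node type triggers on creation.
const jobsForNodeType: Record<string, Array<[JobType, number]>> = {
  Actor: [["actor_profile", 5]],
  Organization: [["org_profile", 5]],
  Newsfeed: [["newsfeed_scrape", 5]],
};

function enqueueEnrichment(nodeId: string, nodeType: string): void {
  for (const [type, priority] of jobsForNodeType[nodeType] ?? []) {
    queue.push({ nodeId, type, priority });
  }
}
```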
Enrichment Types
Wikipedia Enrichment
Applies to: Actor, Organization
When you create an Actor or Organization node:
- Redline searches Wikipedia for a matching article
- Downloads the summary and thumbnail image
- Populates the description field
- Stores the Wikipedia URL for reference
Example:
Create an Actor node for "Volodymyr Zelenskyy"
Redline fetches:
- Summary from Wikipedia
- Profile photo
- Key biographical details
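Redline's own Wikipedia client isn't shown in this guide, but Wikipedia's public REST summary endpoint returns exactly these fields. A minimal sketch (the `WikiSummary` shape is trimmed to just the fields used here):

```ts
// Sketch: Wikipedia's public REST summary endpoint. The WikiSummary
// shape is trimmed to just the fields used on this page.
interface WikiSummary {
  extract: string;                              // plain-text summary
  thumbnail?: { source: string };               // thumbnail image URL
  content_urls: { desktop: { page: string } };  // canonical article URL
}

async function fetchWikipediaSummary(title: string): Promise<WikiSummary | null> {
  const url = `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(title)}`;
  const res = await fetch(url);
  if (!res.ok) return null; // no matching article
  return (await res.json()) as WikiSummary;
}

// Usage (inside an async context):
//   const summary = await fetchWikipediaSummary("Volodymyr Zelenskyy");
//   if (summary) node.description = summary.extract;
```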
Article Extraction
Applies to: Newsfeed
When you add a news article URL:
- Extracts the full article text
- Parses author and publish date
- Downloads the featured image
- Generates an AI summary (if enabled)
Example:
Paste a news article URL
Redline extracts:
- Full article content
- Author name
- Publication date
- Featured image
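The exact extractor Redline uses isn't documented here. One common approach, sketched below as an assumption rather than Redline's actual method, pairs `jsdom` with Mozilla's `Readability` library to pull out the title, byline, and body text:

```ts
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

// Sketch: extract title, byline, and body text from an article URL.
// Publish date usually comes from meta tags and is not handled here.
async function extractArticle(url: string) {
  const html = await (await fetch(url)).text();
  const dom = new JSDOM(html, { url }); // url lets relative links resolve
  const article = new Readability(dom.window.document).parse();
  if (!article) return null; // page isn't article-like
  return {
    title: article.title,
    author: article.byline,    // may be null
    text: article.textContent,
    excerpt: article.excerpt,
  };
}
```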
Social Media Enrichment
Applies to: Social
When you create a Social node from a URL:
- Fetches the full post content
- Downloads author profile info
- Captures engagement metrics (likes, reposts)
- Extracts media attachments
- Resolves quoted posts
Supported Platforms:
- Twitter/X
- Bluesky
Social media enrichment requires you to authenticate with the platform in Settings. Without authentication, only public data is accessible.
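For Bluesky specifically, public posts can be read without credentials through the AT Protocol XRPC API. A minimal sketch (the `BskyPost` type is trimmed to the fields used here; this is not necessarily how Redline's client is built):

```ts
// Sketch: read a public Bluesky post through the AT Protocol XRPC API.
// The BskyPost type is trimmed to just the fields used here.
interface BskyPost {
  author: { handle: string; displayName?: string; avatar?: string };
  record: { text: string };
  likeCount?: number;
  repostCount?: number;
}

async function fetchBskyPost(atUri: string): Promise<BskyPost | undefined> {
  const url =
    "https://public.api.bsky.app/xrpc/app.bsky.feed.getPosts?uris=" +
    encodeURIComponent(atUri);
  const res = await fetch(url);
  if (!res.ok) return undefined;
  const { posts } = (await res.json()) as { posts: BskyPost[] };
  return posts[0]; // engagement metrics: likeCount, repostCount
}
```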
OpenGraph Metadata
Applies to: Embed, Social (for links within posts)
For web URLs:
- Fetches OpenGraph metadata (title, description, image)
- Creates a preview card
- Downloads the preview image
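Conceptually this is just fetching the page and reading its `og:*` meta tags. A rough sketch (a real client should use an HTML parser; this regex assumes `property` appears before `content`, which pages don't guarantee):

```ts
// Sketch: pull og:* meta tags from a page. A real client should use an
// HTML parser; this regex assumes property appears before content.
async function fetchOpenGraph(url: string): Promise<Record<string, string>> {
  const html = await (await fetch(url)).text();
  const og: Record<string, string> = {};
  const re = /<meta[^>]+property=["']og:([^"']+)["'][^>]+content=["']([^"']*)["']/gi;
  for (const [, key, value] of html.matchAll(re)) {
    og[key] = value; // e.g. og.title, og.description, og.image
  }
  return og;
}
```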
Media Archiving
Applies to: All nodes with media
Media files (images, videos, avatars) are:
- Downloaded to local storage for offline access
- Referenced by local path after archiving
- Preserved even if source goes offline
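A minimal sketch of the archiving step, assuming a hypothetical archive directory (Redline's real storage layout will differ):

```ts
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";
import * as path from "node:path";

// Sketch: stream a remote media file into a local archive directory.
// archiveDir is hypothetical; Redline's real storage paths will differ.
async function archiveMedia(url: string, archiveDir: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok || !res.body) throw new Error(`fetch failed: ${res.status}`);
  const localPath = path.join(archiveDir, path.basename(new URL(url).pathname));
  // The cast bridges the web ReadableStream type to Node's stream types.
  await pipeline(Readable.fromWeb(res.body as any), createWriteStream(localPath));
  return localPath; // the node now references this path instead of the URL
}
```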
Enrichment Queue
Enrichment jobs are throttled so external sources aren't overwhelmed:
| Setting | Value |
|---|---|
| Concurrent jobs | 3 |
| Delay between jobs | 500ms |
| Queue check interval | 5 seconds |
| Priority levels | 1-10 (lower = higher priority) |
Job Priorities
| Type | Priority | Description |
|---|---|---|
| media_archive | 4 | Download media files |
| actor_profile | 5 | Wikipedia for actors |
| org_profile | 5 | Wikipedia for organizations |
| newsfeed_scrape | 5 | Article extraction |
| rssfeed_poll | 5 | RSS feed polling |
| social_link_enrich | 6 | OpenGraph for links |
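Putting the two tables together, the queue loop looks roughly like the sketch below. It reuses the `queue` and `EnrichmentJob` type from the sketch at the top of this page; `runJob` is a stand-in executor, not Redline's actual dispatcher:

```ts
const MAX_CONCURRENT = 3;        // concurrent jobs
const JOB_DELAY_MS = 500;        // delay between job starts
const CHECK_INTERVAL_MS = 5_000; // queue check interval

let running = 0;
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

// Stand-in executor; the real version dispatches on job.type.
async function runJob(job: EnrichmentJob): Promise<void> {
  console.log(`running ${job.type} for node ${job.nodeId}`);
}

async function pumpQueue(): Promise<void> {
  while (true) {
    queue.sort((a, b) => a.priority - b.priority); // lower number first
    while (running < MAX_CONCURRENT && queue.length > 0) {
      const job = queue.shift()!;
      running++;
      void runJob(job).finally(() => running--);
      await sleep(JOB_DELAY_MS); // stagger starts
    }
    await sleep(CHECK_INTERVAL_MS);
  }
}
```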
Viewing Enrichment Status
Nodes display enrichment status in the Context Panel:
- Pending - Job queued, waiting to process
- Processing - Currently fetching data
- Complete - Enrichment successful
- Failed - Enrichment failed (may retry)
Manual Re-Enrichment
If enrichment failed or data is stale:
- Select the node
- Open Context Panel
- Click "Re-enrich" or "Refresh"
This re-queues the enrichment jobs.
RSS Feed Polling
RSS Feed nodes have special enrichment behavior:
- Initial Poll - When created, immediately polls the feed
- Scheduled Polls - Polls at the configured interval (e.g., every 15 minutes)
- AI Filtering - Each article is evaluated against your relevance prompt (sketched after the settings below)
- Node Creation - Matching articles become Newsfeed nodes on your board
Configurable Settings:
- Poll interval (1 minute to 24 hours)
- Relevance prompt (what topics are you tracking?)
- Relevance threshold (0-100%, how strict?)
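The filtering step can be sketched as a score-and-threshold loop. `scoreRelevance` below is a toy stand-in for the AI call, and feed parsing is elided:

```ts
// Sketch: score each fetched item against the relevance prompt and keep
// only those above the threshold. Feed parsing is elided.
interface FeedItem { title: string; link: string; summary: string }

// Toy stand-in for the AI call; the real version would send the item
// text plus the prompt to a model and parse a 0-100 score.
async function scoreRelevance(item: FeedItem, prompt: string): Promise<number> {
  return item.title.toLowerCase().includes(prompt.toLowerCase()) ? 100 : 0;
}

async function pollFeed(
  items: FeedItem[],
  relevancePrompt: string,
  threshold: number, // 0-100
): Promise<FeedItem[]> {
  const matches: FeedItem[] = [];
  for (const item of items) {
    const score = await scoreRelevance(item, relevancePrompt);
    if (score >= threshold) matches.push(item); // becomes a Newsfeed node
  }
  return matches;
}
```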
When writing the relevance prompt, be specific about what you're tracking:
"Articles about Tesla stock price, Elon Musk business decisions, or electric vehicle market trends"
This works better than vague prompts like:
"Interesting technology news"
Performance Considerations
Enrichment runs in the background and shouldn't impact your workflow. However:
- Large imports - Adding many nodes at once queues many jobs
- Rate limits - Some sources may rate-limit requests
- Network issues - Jobs retry on failure but may eventually fail
For large-scale data imports, consider adding nodes in batches.
Privacy Note
Enrichment makes external requests to:
- Wikipedia API
- Target URLs (for scraping)
- Social media APIs (when authenticated)
Your investigation structure (which nodes exist, how they're connected) is never sent externally; only individual URLs and search queries leave your machine.