A powerful native macOS GUI application for Eldric - your local AI assistant powered by Ollama with advanced tool-calling capabilities, model management, and AI development workbenches.
Chat with local LLMs through Ollama with full tool execution capabilities. Execute code, browse files, search the web, and more - all running on your own hardware with complete privacy.
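To illustrate what happens under the hood, here is a minimal sketch of a tool-calling chat request against a local Ollama server (default `http://localhost:11434`); the model name and the `get_time` tool are placeholders, not Eldric's built-in tools.

```python
import requests

# Minimal sketch: ask a local Ollama model to call a (hypothetical) tool.
# Assumes Ollama is running on its default port and the model supports tool calling.
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.1",  # placeholder model name
        "messages": [{"role": "user", "content": "What time is it in Tokyo?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_time",  # hypothetical tool, for illustration only
                "description": "Get the current time for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "stream": False,
    },
    timeout=60,
)
# If the model decides to use the tool, the reply contains tool_calls to execute locally.
print(response.json()["message"].get("tool_calls"))
```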
Fine-tune and customize models with advanced training options. Configure LoRA adapters, manage training datasets, and monitor training progress in real-time.
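The LoRA options in the workbench map onto the usual adapter hyperparameters. As a rough illustration (not Eldric's internal code), the same settings look like this with the Hugging Face `peft` library; the base model and target module names are placeholders.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative only: base model and target modules are placeholders.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                  # adapter rank
    lora_alpha=32,         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```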
Visual flow designer for creating multi-agent workflows. Connect specialized agents (Coder, Explorer, Planner, etc.) to build complex AI pipelines.
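As a sketch of the underlying idea (not the flow designer's actual runtime), one agent's output simply becomes the next agent's input; the helper below assumes a local Ollama server, and the model name and prompts are placeholders.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # assumes a local Ollama server

def run_agent(system_prompt: str, user_input: str, model: str = "llama3.1") -> str:
    """Run a single agent: one system prompt, one user message, one reply."""
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
        "stream": False,
    }, timeout=120)
    return resp.json()["message"]["content"]

# Planner -> Coder: the planner's plan is fed to the coder as its task description.
plan = run_agent("You are a Planner. Break the task into numbered steps.",
                 "Build a CLI tool that renames files by date.")
code = run_agent("You are a Coder. Implement the given plan in Python.", plan)
print(code)
```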
Mixture of Experts configuration and management. Define expert routing, load balancing strategies, and optimize inference across multiple specialized models.
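For readers new to MoE routing, the gating idea reduces to a learned scorer that sends each token to its top-k experts; the PyTorch snippet below is a generic illustration of that mechanism, not Eldric's implementation.

```python
import torch
import torch.nn.functional as F

# Generic top-k gating sketch: route each token to its 2 highest-scoring experts.
num_experts, top_k, hidden = 4, 2, 64
gate = torch.nn.Linear(hidden, num_experts)  # learned router
experts = torch.nn.ModuleList([torch.nn.Linear(hidden, hidden) for _ in range(num_experts)])

x = torch.randn(8, hidden)                        # 8 tokens
scores = F.softmax(gate(x), dim=-1)               # routing probabilities
weights, idx = scores.topk(top_k, dim=-1)         # pick top-k experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize the kept weights

out = torch.zeros_like(x)
for slot in range(top_k):
    for e in range(num_experts):
        mask = idx[:, slot] == e                  # tokens assigned to expert e in this slot
        if mask.any():
            out[mask] += weights[mask, slot:slot + 1] * experts[e](x[mask])

# A load-balancing penalty typically nudges `scores` toward uniform expert usage.
```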
Merge multiple models using advanced techniques: SLERP, TIES, DARE, Linear interpolation, and Task Arithmetic. Save and reuse merge recipes.
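To show what a SLERP merge does numerically, here is a per-tensor spherical interpolation sketch in PyTorch; real merge tooling adds safeguards (per-layer weights, dtype handling, tokenizer alignment) that are omitted here.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    cos = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps)
    omega = torch.arccos(cos.clamp(-1 + 1e-6, 1 - 1e-6))
    if omega.abs() < 1e-4:                        # nearly parallel: fall back to a linear blend
        return (1 - t) * a + t * b
    s = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / s) * a_flat + (torch.sin(t * omega) / s) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Toy "state dicts" standing in for two checkpoints with identical architectures.
model_a = {"layer.weight": torch.randn(4, 4)}
model_b = {"layer.weight": torch.randn(4, 4)}
merged = {name: slerp(0.5, model_a[name], model_b[name]) for name in model_a}
```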
Train models with cutting-edge reasoning techniques: COCONUT, Quiet-STaR, Pause Tokens, Hidden CoT Distillation, and DeepSeek DSA.
Build Model Context Protocol servers with the integrated IDE. Templates for Python and Node.js, tool definition editor, and one-click deployment.
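For context on what the IDE scaffolds, a minimal Python MCP server built on the official `mcp` SDK looks roughly like this; the `word_count` tool is a made-up example, not one of the bundled templates.

```python
from mcp.server.fastmcp import FastMCP

# Minimal MCP server sketch; the tool below is a made-up example.
mcp = FastMCP("word-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, ready to be registered as an MCP server
```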
Build and query vector databases for retrieval-augmented generation. Import documents, configure chunking strategies, and enhance model responses with your data.
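Chunking strategy is the main knob when importing documents. The sketch below shows a plain fixed-size chunker with overlap, one of the simpler strategies a RAG pipeline can use; each chunk would then be embedded and stored in the vector database.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a sliding overlap.

    Overlap keeps sentences that straddle a boundary retrievable from both chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece.strip():
            chunks.append(piece)
    return chunks

# Example: chunk a placeholder document before embedding it.
docs = chunk_text("Your imported document text goes here. " * 100)
print(len(docs), "chunks")
```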
Connect to SQLite, PostgreSQL, MySQL, and DB2 databases. Execute queries, browse schemas, and let AI analyze your data.
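As a concrete example of the query side, the kind of work the SQL tooling does can be sketched with Python's built-in `sqlite3` module; the table and column names below are placeholders, not a schema Eldric expects.

```python
import sqlite3

# Placeholder in-memory database and schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("Ada", 120.0), ("Linus", 75.5), ("Ada", 40.0)])

# The kind of query you would run (or let the assistant generate) against your data.
rows = conn.execute(
    "SELECT customer, SUM(total) AS spent FROM orders GROUP BY customer ORDER BY spent DESC"
).fetchall()
for customer, spent in rows:
    print(customer, spent)
```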
Explore the Eldric GUI interface and features through our comprehensive screenshot gallery.
The main chat interface with conversation sidebar, model selector, and tool execution panel.
Configure general application preferences and behavior.
Default model selection, temperature, and inference parameters.
Ollama server URL, proxy configuration, and API endpoints.
Model Context Protocol server configuration and tool management.
System prompt templates and default instructions.
Conversation and configuration backup options.
SMTP configuration for email notifications and exports.
Web crawling and content extraction configuration.
Project-specific settings and workspace configuration.
Manage project files and directory settings.
Main workbench interface for model fine-tuning and customization.
Configure training parameters, LoRA settings, and hyperparameters.
Import and manage training datasets for fine-tuning.
Real-time training metrics and loss visualization.
Evaluate trained models with test prompts.
Export trained models in various formats.
Advanced training options and optimization settings.
Main training workbench with job queue and status.
Create a new training job with model and dataset selection.
Configure learning rate, batch size, and epochs.
Set LoRA rank, alpha, and target modules.
Live training loss and accuracy metrics.
Monitor GPU utilization and memory during training.
Save and load training checkpoints.
View past training runs and their results.
Compare training results across different runs.
Export and deploy the trained model.
Visual canvas for designing multi-agent workflows.
Drag and drop agents: Coder, Explorer, Planner, etc.
Configure individual agent settings and tools.
Connect agents and define data flow between them.
Run and monitor agent workflow execution.
View agent outputs and intermediate results.
Mixture of Experts configuration dashboard.
Define expert models and their specializations.
Configure the routing mechanism between experts.
Monitor and adjust expert load distribution.
Step-by-step wizard for merging models.
Choose merge method: SLERP, TIES, DARE, Linear.
Select source models and set weights.
Configure quantization and output settings.
Monitor merge operation progress.
Manage saved merge recipes for reuse.
Browse pre-configured model templates by category.
View template configuration and parameters.
Create new models from template configurations.
Rapidly create customized models with presets.
Choose base model from available options.
Define custom system prompt for the model.
Set temperature, context length, and other inference parameters.
Configure latent reasoning training with technique selection.
Chain-of-Continuous-Thought configuration.
Self-taught reasoning technique setup.
Generate training datasets for latent reasoning.
Test trained models with latent reasoning.
View and analyze latent vectors.
Monitor latent reasoning training metrics.
Built-in documentation for techniques.
Manage vector databases for RAG.
Import documents into the knowledge base.
Configure text chunking strategies.
Test semantic search queries.
Connect to SQLite, PostgreSQL, MySQL, or DB2.
Execute SQL queries and view results.
Integrated development environment for MCP servers.
Edit MCP server code with syntax highlighting.
Define tools with parameters and schemas.
Start from pre-built MCP templates.
Browse and manage saved prompts.
Create and edit prompts with variables.
Define dynamic variables for prompts.
Test prompts with different models.
Organize prompts by category.
Share prompts via import and export.
Create multi-step prompt chains.
Track prompt versions and changes.
Compare responses from multiple models.
Compare performance metrics across models.
Manage remote GPU training servers.
Configure SSH connections and credentials.
Monitor running background operations.
View detailed task progress and logs.