# Interactive Chat
Learn how to engage in ongoing conversations with AI models using Umwelten's interactive chat feature.
## Overview
Interactive chat allows you to have extended conversations with AI models, maintaining context across multiple exchanges. This is ideal for:
- Extended discussions: Complex topics that require multiple back-and-forth exchanges
- Iterative problem solving: Refining solutions through conversation
- Learning sessions: Educational interactions with persistent context
- Creative collaboration: Building ideas through ongoing dialogue
## Getting Started
### Basic Chat Sessions
Start an interactive conversation:
```bash
# Basic chat
pnpm run cli -- chat --provider ollama --model gemma3:latest

# Chat with premium model
pnpm run cli -- chat --provider google --model gemini-3-flash-preview

# Chat with tools enabled
pnpm run cli -- chat --provider openrouter --model openai/gpt-4o --tools calculator,statistics
```

## Provider-Specific Examples
### Google Models
```bash
# Fast and cost-effective chat
pnpm run cli -- chat --provider google --model gemini-3-flash-preview

# High-quality analytical chat
pnpm run cli -- chat --provider google --model gemini-2.5-pro-exp-03-25

# Vision-enabled chat
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --file ./image.jpg
```

### Ollama Models (Local)
```bash
# General conversation
pnpm run cli -- chat --provider ollama --model gemma3:12b

# Code-focused chat
pnpm run cli -- chat --provider ollama --model codestral:latest

# Vision chat
pnpm run cli -- chat --provider ollama --model qwen2.5vl:latest --file ./screenshot.png
```

### OpenRouter Models
```bash
# Premium quality chat
pnpm run cli -- chat --provider openrouter --model openai/gpt-4o

# Analytical chat
pnpm run cli -- chat --provider openrouter --model anthropic/claude-3.7-sonnet:thinking

# Cost-effective chat
pnpm run cli -- chat --provider openrouter --model openai/gpt-4o-mini
```

### MiniMax Models
```bash
# Direct MiniMax chat
pnpm run cli -- chat --provider minimax --model MiniMax-M2.5

# Faster MiniMax low-latency chat
pnpm run cli -- chat --provider minimax --model MiniMax-M2.5-highspeed
```

### Fireworks Models
```bash
# Discover available Fireworks model IDs first
pnpm run cli -- models --provider fireworks

# Then start a chat with a Fireworks model
pnpm run cli -- chat --provider fireworks --model <fireworks-model-id>
```

### LM Studio (Local)
```bash
# Local model chat (ensure the LM Studio server is running)
pnpm run cli -- chat --provider lmstudio --model mistralai/devstral-small-2505
```

## Chat Commands
Within a chat session, you can use special commands to control the conversation:
### Basic Commands
- `/?`: Show help and available commands
- `/reset`: Clear conversation history and start fresh
- `/history`: Display the conversation history
- `exit` or `quit`: End the chat session
### Memory Commands
- `/mem`: Show memory facts (requires the `--memory` flag)
- `/mem clear`: Clear all stored memory facts
- `/mem export`: Export memory facts to a file
### Advanced Commands
- `/system <message>`: Update the system message
- `/temperature <value>`: Change the temperature setting
- `/provider <name>`: Switch to a different provider
- `/model <name>`: Switch to a different model
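As a sketch of how these commands fit together, a session might look like the following (illustrative only; the exact prompt and output formatting of the chat interface may differ):

```
> /system You are a concise code reviewer
> /temperature 0.3
> Is fetchUserDataAndRender a reasonable function name?
> /history
> exit
```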
## Enhanced Chat Features
### Memory-Enabled Chat
Enable persistent memory to maintain context across sessions:
```bash
# Chat with memory for persistent facts
pnpm run cli -- chat --provider ollama --model gemma3:latest --memory
```

The memory system automatically:
- Extracts important facts from conversations
- Maintains context across sessions
- Provides personalized responses based on learned information
- Builds knowledge over time about your preferences and needs
#### Memory Examples
```bash
# Start a memory-enabled chat
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --memory

# During chat, the AI will remember:
> "I'm a software developer working on a React project"
> "My name is Alex and I prefer TypeScript over JavaScript"
> "I'm learning about microservices architecture"

# In future sessions, the AI will reference this information
> "Based on our previous conversations, I know you're working on a React project..."
```

### Tool-Enabled Chat
Enhance your chat with powerful tools:
```bash
# Chat with specific tools
pnpm run cli -- chat --provider openrouter --model openai/gpt-4o --tools calculator,statistics

# Chat with other tools (use 'pnpm run cli -- tools list' to see all)
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --tools web_search,file_analysis
```

#### Available Tools
- `calculator`: Mathematical calculations and formulas
- `statistics`: Statistical analysis and data processing
- `randomNumber`: Generate random numbers within ranges
#### Tool Usage Examples
```bash
# Math-focused chat
pnpm run cli -- chat --provider openrouter --model openai/gpt-4o --tools calculator

# Data analysis chat
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --tools statistics

# Multi-tool chat
pnpm run cli -- chat --provider ollama --model qwen3:latest --tools calculator,statistics,randomNumber
```

#### Tool Demo
Test tool functionality:
```bash
# Interactive tool demo
pnpm run cli -- tools demo

# Custom demo
pnpm run cli -- tools demo --prompt "Calculate 15 + 27, then generate a random number"
```

### File Attachments in Chat
Start a chat with file context:
```bash
# Start chat with a document
pnpm run cli -- chat --provider google --model gemini-1.5-flash-latest --file ./document.pdf

# Start chat with an image
pnpm run cli -- chat --provider ollama --model qwen2.5vl:latest --file ./photo.jpg

# Start chat with multiple files
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --file ./report.pdf --file ./data.csv
```

#### File Reference Examples
During chat, you can reference attached files:
```
> "Summarize the main points from the attached document"
> "What are the key findings in section 3 of the PDF?"
> "Analyze the trends shown in the attached spreadsheet"
> "Describe what you see in the image I shared"
```

## Advanced Chat Configuration
### System Messages
Set the AI's role and behavior for the entire conversation:
```bash
# Technical expert role
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --system "You are a senior software architect with expertise in distributed systems"

# Creative writing role
pnpm run cli -- chat \
  --provider ollama --model gemma3:27b \
  --system "You are a creative writer specializing in science fiction short stories"

# Educational role
pnpm run cli -- chat \
  --provider openrouter --model anthropic/claude-3.7-sonnet:thinking \
  --system "You are a patient teacher who explains complex concepts simply"
```

### Temperature Control
Adjust creativity and randomness for the conversation:
```bash
# Very focused and deterministic (0.0-0.3)
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --temperature 0.1

# Balanced creativity (0.4-0.7)
pnpm run cli -- chat \
  --provider ollama --model gemma3:12b \
  --temperature 0.6

# Highly creative (0.8-2.0)
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --temperature 1.5
```

### Timeout Settings
Set appropriate timeouts for different types of conversations:
```bash
# Quick responses (default: 30 seconds)
pnpm run cli -- chat --provider ollama --model gemma3:12b --timeout 30000

# Complex analysis (longer timeout)
pnpm run cli -- chat \
  --provider google --model gemini-2.5-pro-exp-03-25 \
  --timeout 60000

# Extended processing (very long timeout)
pnpm run cli -- chat \
  --provider openrouter --model openai/gpt-4o \
  --timeout 120000
```

## Use Cases and Examples
### Educational Support
```bash
# Math tutoring session
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --system "You are a math tutor who shows step-by-step solutions" \
  --tools calculator

# Language learning
pnpm run cli -- chat \
  --provider ollama --model gemma3:27b \
  --system "You are a Spanish language tutor. Respond in Spanish and help me practice"

# Concept explanation
pnpm run cli -- chat \
  --provider openrouter --model anthropic/claude-3.7-sonnet:thinking \
  --system "You are a patient teacher explaining complex concepts simply"
```

### Creative Collaboration
```bash
# Story writing collaboration
pnpm run cli -- chat \
  --provider ollama --model gemma3:27b \
  --system "You are a creative writing partner. Help me develop characters and plot" \
  --temperature 0.8

# Brainstorming session
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --system "You are an innovation consultant. Help me brainstorm solutions" \
  --temperature 0.9

# Design feedback
pnpm run cli -- chat \
  --provider openrouter --model openai/gpt-4o \
  --system "You are a UX designer. Provide feedback on my design ideas"
```

### Problem Solving
```bash
# Debugging session
pnpm run cli -- chat \
  --provider ollama --model codestral:latest \
  --system "You are a senior software engineer helping with debugging" \
  --tools code_execution

# Business analysis
pnpm run cli -- chat \
  --provider google --model gemini-2.5-pro-exp-03-25 \
  --system "You are a business analyst. Help me analyze market opportunities" \
  --tools web_search

# Research assistance
pnpm run cli -- chat \
  --provider openrouter --model anthropic/claude-3.7-sonnet:thinking \
  --system "You are a research assistant. Help me find and analyze information" \
  --tools web_search
```

### Code Development
```bash
# Code review session
pnpm run cli -- chat \
  --provider ollama --model codestral:latest \
  --system "You are a senior developer conducting a code review" \
  --file ./my-code.js

# Architecture discussion
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --system "You are a software architect. Help me design system architecture"

# Testing strategy
pnpm run cli -- chat \
  --provider openrouter --model openai/gpt-4o \
  --system "You are a QA engineer. Help me develop testing strategies"
```

## Best Practices
### Conversation Management
- Start with context: Provide relevant background information early
- Be specific: Ask clear, focused questions
- Build on responses: Reference previous exchanges to maintain continuity
- Use commands effectively: Leverage chat commands for better control
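If you find yourself restarting the same kinds of sessions, the context-setting advice above can be captured in a small wrapper script. This is a hypothetical helper, not part of Umwelten itself; the presets below simply reuse flag combinations shown elsewhere in this guide:

```shell
#!/usr/bin/env sh
# Hypothetical helper: map a named preset to a chat invocation.
# It echoes the command for review; pipe the output to `sh` to run it.
build_chat_cmd() {
  case "$1" in
    debug)
      echo 'pnpm run cli -- chat --provider ollama --model codestral:latest --system "You are a senior software engineer helping with debugging"'
      ;;
    tutor)
      echo 'pnpm run cli -- chat --provider google --model gemini-3-flash-preview --tools calculator'
      ;;
    *)
      # Default: basic local chat
      echo 'pnpm run cli -- chat --provider ollama --model gemma3:latest'
      ;;
  esac
}

build_chat_cmd debug
```

Echoing instead of executing keeps the script safe to experiment with; once a preset looks right, run `build_chat_cmd debug | sh`.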
### Memory Usage
- Enable memory for long-term projects: Keeps context across sessions
- Review memory regularly: Use `/mem` to see what's been learned
- Clear memory when needed: Use `/mem clear` for fresh starts
- Export important facts: Use `/mem export` to save valuable information
### Tool Integration
- Choose relevant tools: Select tools that match your use case
- Combine tools effectively: Use multiple tools for complex tasks
- Understand tool limitations: Know what each tool can and cannot do
- Provide context: Give tools the information they need to work effectively
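Putting these practices together, a single invocation can pair a role-setting system prompt with the tools that match it (all flags and tool names as documented earlier in this guide):

```bash
pnpm run cli -- chat \
  --provider google --model gemini-3-flash-preview \
  --system "You are a data analyst. Show your calculations step by step" \
  --tools calculator,statistics
```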
### Error Handling
- Use appropriate timeouts: Set longer timeouts for complex conversations
- Handle interruptions gracefully: Use `/reset` if the conversation gets stuck
- Switch providers if needed: Use `/provider` to try different options
- Save important conversations: Export chat history for important discussions
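When a session does get stuck, these recovery commands can be applied in sequence without leaving the chat (illustrative only; actual output varies):

```
> /history
> /reset
> /provider google
> /model gemini-3-flash-preview
```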
## Troubleshooting
### Common Issues
- Conversation Context Loss: Use memory-enabled chat or `/history` to review
- Slow Responses: Increase timeout values or switch to faster models
- Tool Failures: Check tool availability and provide necessary context
- Memory Issues: Use `/mem clear` to reset memory if it becomes corrupted
- Provider Errors: Switch providers or check API key configuration
### Debug Commands
```bash
# Test chat functionality
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --timeout 10000

# Check available tools
pnpm run cli -- tools list

# Test memory system
pnpm run cli -- chat --provider ollama --model gemma3:latest --memory

# Verify file attachments
pnpm run cli -- chat --provider google --model gemini-3-flash-preview --file ./test.txt
```

## Next Steps
- Learn about running single prompts for quick tasks
- Explore model evaluation for systematic testing
- Try batch processing for multiple files
- See structured output for data extraction