Meetily: Privacy-First Meeting Transcription with Ollama and Gemma 3 4B
Complete guide to setting up Meetily for offline meeting transcription and AI summarization using Ollama with Gemma 3 4B. No cloud required, 100% local processing.
In an era where every meeting tool sends your data to the cloud, Meetily (formerly meeting-minutes) stands out as a 100% local, privacy-first AI meeting assistant.
No cloud. No subscriptions. No data leaving your machine.
This guide shows you how to set up Meetily with Ollama and Gemma 3 4B for completely offline meeting transcription and AI-powered summaries.
What is Meetily?
Meetily is an open-source AI meeting assistant built with Rust and Tauri that:
- Transcribes meetings in real-time using Whisper or Parakeet models
- Generates AI summaries using local LLMs via Ollama
- Captures audio from both microphone and system (screen sharing)
- Works completely offline - no internet connection needed
- Supports GPU acceleration (CUDA, Metal, Vulkan)
- Provides speaker diarization to distinguish between speakers
Key stats:
- ⭐ 9.4k+ GitHub stars
- 🔒 Privacy-first by design
- 🚀 4x faster than standard Whisper (with Parakeet)
- 💰 Free & open source (MIT License)
Why Meetily Matters
The Privacy Problem
Most meeting AI tools create significant risks:
| Risk | Impact |
|---|---|
| Data breaches | Average cost: $4.4M (IBM 2024) |
| GDPR fines | €5.88 billion issued by 2025 |
| Legal issues | 400+ unlawful recording cases in California/year |
Cloud meeting tools promise convenience but deliver:
- Unclear data storage practices
- Potential unauthorized access
- Vendor lock-in
- Compliance nightmares
Meetily solves this:
- ✅ Complete data sovereignty
- ✅ Zero vendor lock-in
- ✅ Full control over sensitive conversations
- ✅ GDPR compliant by design
Architecture Overview
Meetily is built as a single, self-contained application:
```
┌─────────────────────────────────────────┐
│          Meetily (Tauri App)            │
│                                         │
│  ┌────────────┐       ┌────────────┐    │
│  │  Frontend  │◄─────►│  Backend   │    │
│  │ (Next.js)  │       │   (Rust)   │    │
│  └────────────┘       └─────┬──────┘    │
│                             │           │
│  ┌──────────────────────────▼────────┐  │
│  │  Audio Capture & Processing       │  │
│  │  - Microphone                     │  │
│  │  - System audio                   │  │
│  │  - Professional mixing            │  │
│  └──────────────┬────────────────────┘  │
│                 │                       │
│  ┌──────────────▼────────────────────┐  │
│  │  Transcription Engine             │  │
│  │  - Whisper (OpenAI)               │  │
│  │  - Parakeet (NVIDIA) - 4x faster  │  │
│  │  - GPU acceleration               │  │
│  └──────────────┬────────────────────┘  │
│                 │                       │
│  ┌──────────────▼────────────────────┐  │
│  │  AI Summarization                 │  │
│  │  - Ollama (local LLMs)            │  │
│  │  - Claude API (optional)          │  │
│  │  - Groq API (optional)            │  │
│  │  - Custom OpenAI endpoint         │  │
│  └───────────────────────────────────┘  │
└─────────────────────────────────────────┘
                  │
                  ▼
           Local Storage
 (Recordings, Transcripts, Summaries)
```
Installation
Prerequisites
Before installing Meetily, you need:
- Ollama (for AI summaries)
- Meetily application
- Gemma 3 4B model (optional, but recommended)
Step 1: Install Ollama
macOS/Linux:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: Download from ollama.com/download

Verify installation:

```bash
ollama --version
```
Step 2: Pull Gemma 3 4B Model
Gemma 3 4B is perfect for meeting summaries: fast, accurate, and small enough to run on modest hardware.
```bash
# Pull the model (one-time download, ~3.3GB)
ollama pull gemma3:4b

# Verify it works
ollama run gemma3:4b
```
Alternative models:
| Model | Size | Speed | Quality | Use Case |
|---|---|---|---|---|
| gemma3:4b | 3.3GB | ⚡⚡⚡ | ⭐⭐⭐ | Recommended - Best balance |
| llama3.2:3b | 2GB | ⚡⚡⚡ | ⭐⭐ | Fastest, lower quality |
| qwen2.5:7b | 4.7GB | ⚡⚡ | ⭐⭐⭐⭐ | Higher quality, slower |
| llama3.3:70b | 43GB | ⚡ | ⭐⭐⭐⭐⭐ | Best quality, needs GPU |
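Before pulling a model, you can sanity-check whether it will fit by comparing its size against your free memory. A minimal sketch, where the size table mirrors the one above and the helper itself is illustrative (not part of Meetily or Ollama):

```python
# Approximate download sizes in GB; a loaded model typically needs a similar
# amount of RAM/VRAM plus some headroom for context and runtime overhead.
MODEL_SIZES_GB = {
    "gemma3:4b": 3.3,
    "llama3.2:3b": 2.0,
    "qwen2.5:7b": 4.7,
    "llama3.3:70b": 43.0,
}

def pick_model(free_memory_gb: float, headroom: float = 1.5) -> str:
    """Return the largest model whose size (with headroom) fits in free memory."""
    candidates = [
        (size, name)
        for name, size in MODEL_SIZES_GB.items()
        if size * headroom <= free_memory_gb
    ]
    if not candidates:
        return "llama3.2:3b"  # smallest option as a fallback
    return max(candidates)[1]

print(pick_model(8.0))   # choice for a machine with ~8GB free
print(pick_model(64.0))  # choice with plenty of memory
```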
Step 3: Install Meetily
Windows:

1. Download `x64-setup.exe` from Releases
2. Right-click → Properties → check Unblock → OK
3. Run the installer (click More info → Run anyway if warned)

macOS:

1. Download `meetily_0.2.0_aarch64.dmg` from Releases
2. Open the `.dmg` file
3. Drag Meetily to the Applications folder
4. Open from Applications (you may need to allow it in System Settings → Privacy & Security)

Linux:
Build from source:

```bash
# Clone repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/frontend

# Install dependencies
pnpm install

# Build with GPU support
./build-gpu.sh

# The built app will be in ../backend/target/release/
```
Configuration
Step 4: Configure Ollama in Meetily
1. Launch Meetily
2. Go to Settings (gear icon)
3. Configure AI Provider:
   - Select "Ollama" as AI provider
   - Endpoint: `http://localhost:11434` (default)
   - Model: `gemma3:4b`
   - Temperature: `0.7` (balanced creativity)
4. Configure Transcription:
   - Model: Parakeet (4x faster) or Whisper (more accurate)
   - Language: Auto-detect or specify
   - Enable GPU acceleration (if available)
5. Audio Settings:
   - Microphone: Select your input device
   - System Audio: Enable if capturing screen shares
   - Audio Mixing: Automatic ducking and clipping prevention
Configuration File (Optional)
For advanced users, edit the config directly:
Location:

- Windows: `%APPDATA%\meetily\config.json`
- macOS: `~/Library/Application Support/meetily/config.json`
- Linux: `~/.config/meetily/config.json`
Example config:
```json
{
  "ai_provider": "ollama",
  "ollama": {
    "endpoint": "http://localhost:11434",
    "model": "gemma3:4b",
    "temperature": 0.7,
    "max_tokens": 2000
  },
  "transcription": {
    "model": "parakeet",
    "language": "auto",
    "gpu_acceleration": true,
    "real_time": true
  },
  "audio": {
    "microphone_device": "default",
    "system_audio_enabled": true,
    "auto_ducking": true
  }
}
```
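A malformed `config.json` can silently fall back to defaults, so a quick validation pass is useful before launching. Here is a standalone sketch; the field names follow the example above and may differ across Meetily versions:

```python
import json

REQUIRED_TOP_LEVEL = ("ai_provider", "ollama", "transcription")

def validate_config(text: str) -> list[str]:
    """Return a list of problems found in a Meetily-style config; empty means OK."""
    problems = []
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    for key in REQUIRED_TOP_LEVEL:
        if key not in cfg:
            problems.append(f"missing key: {key}")
    ollama = cfg.get("ollama", {})
    if not str(ollama.get("endpoint", "")).startswith("http"):
        problems.append("ollama.endpoint should be a URL like http://localhost:11434")
    temp = ollama.get("temperature", 0.7)
    if not (0.0 <= float(temp) <= 2.0):
        problems.append("ollama.temperature should be between 0.0 and 2.0")
    return problems

sample = ('{"ai_provider": "ollama", '
          '"ollama": {"endpoint": "http://localhost:11434", "temperature": 0.7}, '
          '"transcription": {"model": "parakeet"}}')
print(validate_config(sample))  # an empty list means the config looks sane
```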
Using Meetily: Complete Workflow
1. Start a Meeting
Launch Meetily β Click "New Meeting"
What happens:
- Microphone starts recording
- Real-time transcription begins
- Transcript appears live on screen
2. During the Meeting
Live transcript:

```
[00:01] Speaker 1: Let's discuss the Q1 roadmap.
[00:03] Speaker 2: I think we should prioritize the API redesign.
[00:07] Speaker 1: Agreed. What about the mobile app?
...
```
Controls:
- ⏸️ Pause/Resume transcription
- 🎤 Mute/Unmute microphone
- 💾 Save current transcript
- ⏹️ Stop meeting
3. Generate AI Summary
After the meeting ends (or during):
Click "Generate Summary"
Ollama processes locally:

```
Connecting to Ollama...
Using model: gemma3:4b
Analyzing transcript (1,243 words)...
Generating summary...
```
Example output:

```markdown
## Meeting Summary

**Date:** January 21, 2026
**Duration:** 23 minutes
**Participants:** 3 speakers

### Key Discussion Points

1. **Q1 Roadmap Planning**
   - Prioritize API redesign (due: Feb 15)
   - Mobile app updates scheduled for March
   - Backend optimization ongoing

2. **Action Items**
   - [ ] Speaker 1: Draft API specification by Jan 25
   - [ ] Speaker 2: Review mobile mockups
   - [ ] Speaker 3: Conduct performance testing

3. **Decisions Made**
   - Approved API v2 proposal
   - Delayed feature X to Q2
   - Budget increase for infrastructure

### Next Steps

- Weekly sync meetings starting Feb 1
- API documentation sprint next week
```
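Under the hood, a local summary is a single HTTP POST to Ollama's `/api/generate` endpoint. The sketch below shows the shape of such a request; the endpoint and its `model`/`prompt`/`stream` fields are Ollama's documented API, while the prompt text and helper names are illustrative, not Meetily's actual internals:

```python
import json
import urllib.request

def build_summary_request(transcript: str, model: str = "gemma3:4b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    prompt = (
        "You are a professional meeting assistant. Summarize the following "
        "transcript with key points, action items, and decisions, in Markdown.\n\n"
        + transcript
    )
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"temperature": 0.7},
    }

def summarize(transcript: str) -> str:
    """POST the request to a locally running Ollama and return the summary text."""
    body = json.dumps(build_summary_request(transcript)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# summarize() needs Ollama running; build_summary_request() works anywhere:
print(build_summary_request("Speaker 1: Let's discuss the Q1 roadmap.")["model"])
```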
4. Export Options
Available formats:

- Markdown (`.md`) - Default
- Plain text (`.txt`)
- JSON (`.json`) - Structured data
- PDF (Meetily PRO)
- DOCX (Meetily PRO)

Export location:

- Windows: `Documents\Meetily\Meetings\`
- macOS: `~/Documents/Meetily/Meetings/`
- Linux: `~/Documents/Meetily/Meetings/`
Advanced: Custom Summary Prompts
You can customize how Gemma 3 4B generates summaries by modifying the system prompt.
Default prompt:

```
You are a professional meeting assistant. Analyze the following
transcript and provide:

1. A brief summary (2-3 sentences)
2. Key discussion points
3. Action items with owners
4. Decisions made
5. Next steps

Format the output in Markdown.
```
Custom prompt for technical meetings:

```
You are a technical meeting assistant specialized in software
development. Analyze this engineering meeting transcript and provide:

1. Technical decisions made
2. Architecture discussions
3. Code review notes
4. Performance considerations
5. Action items with assigned engineers
6. Blocked items requiring resolution

Use technical terminology appropriately. Format in Markdown.
```
How to apply:
Edit `config.json`:

```json
{
  "ai_provider": "ollama",
  "ollama": {
    "model": "gemma3:4b",
    "custom_prompt": "Your custom prompt here..."
  }
}
```
Performance Optimization
GPU Acceleration
Automatic detection:
- macOS: Apple Silicon (Metal) + CoreML
- Windows/Linux: NVIDIA (CUDA), AMD/Intel (Vulkan)
Check if the GPU is active:

```bash
# List running models
ollama ps

# The PROCESSOR column should show "100% GPU" (or a GPU/CPU split)
```
Force GPU usage:

```bash
# Linux/Windows with NVIDIA
export CUDA_VISIBLE_DEVICES=0

# Run Ollama
ollama serve
```
Model Performance Benchmarks
Tested on various hardware (transcription speed is shown as processing time relative to audio length, so lower is faster):
| Hardware | Model | Transcription | Summary Generation |
|---|---|---|---|
| M1 MacBook | Parakeet + Gemma 3 4B | ~0.5x realtime | ~2 sec/1000 words |
| M3 MacBook Pro | Parakeet + Gemma 3 4B | ~0.3x realtime | ~1 sec/1000 words |
| RTX 3060 (12GB) | Parakeet + Gemma 3 4B | ~0.4x realtime | ~1.5 sec/1000 words |
| RTX 4090 | Whisper + Qwen 7B | ~0.2x realtime | ~0.5 sec/1000 words |
| CPU Only (i7) | Whisper Tiny + Gemma 3 4B | ~2x realtime | ~8 sec/1000 words |
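Taking the table's factors as processing time over audio length, a small helper turns them into wall-clock estimates (the factors below are copied from the table):

```python
def processing_minutes(audio_minutes: float, realtime_factor: float) -> float:
    """Estimated transcription time given a realtime factor (lower = faster)."""
    return audio_minutes * realtime_factor

# A 60-minute meeting on hardware from the benchmark table:
for hardware, factor in [("M1 MacBook", 0.5), ("M3 MacBook Pro", 0.3), ("CPU-only i7", 2.0)]:
    print(f"{hardware}: ~{processing_minutes(60, factor):.0f} min")
```

So a 0.5x system finishes an hour-long recording in about 30 minutes, while a 2x CPU-only setup needs roughly two hours.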
Recommendations:
- Minimum: 8GB RAM, decent CPU
- Recommended: 16GB RAM, GPU with 4GB+ VRAM
- Optimal: 32GB RAM, GPU with 8GB+ VRAM
Memory Usage
| Component | RAM Usage | VRAM Usage |
|---|---|---|
| Meetily App | ~200MB | - |
| Parakeet Transcription | ~1GB | ~2GB (GPU) |
| Ollama (idle) | ~500MB | - |
| Gemma 3 4B (active) | ~3GB | ~2.5GB (GPU) |
| Total (GPU) | ~4.7GB | ~4.5GB |
| Total (CPU only) | ~8GB | N/A |
Troubleshooting
Ollama Connection Issues
Problem: "Failed to connect to Ollama"
Solutions:
1. Check if Ollama is running:

   ```bash
   # Check status
   curl http://localhost:11434/api/tags
   # Should return a list of models
   ```

2. Restart Ollama:

   ```bash
   # macOS/Linux
   killall ollama
   ollama serve

   # Windows: close Ollama from the system tray, then reopen
   ```

3. Verify the model is pulled:

   ```bash
   ollama list
   # Should show: gemma3:4b
   ```

4. Check the firewall:
   - Allow port `11434` in the firewall
   - Windows: `netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434`
Transcription Issues
Problem: "Slow or delayed transcription"
Solutions:
1. Enable GPU acceleration:
   - Check Settings → Transcription → GPU Acceleration
   - Rebuild with GPU support (Linux): `./build-gpu.sh`

2. Use a faster model:
   - Switch from Whisper to Parakeet (4x faster)
   - Settings → Transcription → Model: Parakeet

3. Lower the audio quality:
   - Reduce the sample rate to 16kHz
   - Settings → Audio → Sample Rate: 16000
Problem: "Microphone not detected"
Solutions:
1. Grant permissions:
   - macOS: System Settings → Privacy & Security → Microphone → Enable Meetily
   - Windows: Settings → Privacy → Microphone → Allow apps

2. Select the correct device:
   - Settings → Audio → Microphone → Choose your device

3. Test the audio:

   ```bash
   # Test microphone recording (Linux)
   arecord -l                         # List devices
   arecord -D hw:0,0 -d 5 test.wav    # Record 5 seconds
   ```
Summary Generation Issues
Problem: "Summary is low quality or off-topic"
Solutions:
1. Use a better model:

   ```bash
   # Pull a higher-quality model
   ollama pull qwen2.5:7b
   # Then update Meetily settings to use qwen2.5:7b
   ```

2. Adjust temperature:
   - Lower temperature (0.3-0.5): more factual, less creative
   - Higher temperature (0.8-1.0): more creative, less precise
   - Recommended: 0.7

3. Improve transcript quality:
   - Ensure clear audio (reduce background noise)
   - Use a better microphone
   - Speak clearly and at a moderate pace

4. Customize the prompt:
   - Add specific instructions to the system prompt
   - Example: "Focus on action items" or "Emphasize technical details"
Comparison: Meetily vs Alternatives
| Feature | Meetily | Otter.ai | Fireflies.ai | Grain |
|---|---|---|---|---|
| Privacy | ✅ 100% Local | ❌ Cloud | ❌ Cloud | ❌ Cloud |
| Cost | ✅ Free (OSS) | $16.99/mo | $10/mo | $15/mo |
| Offline | ✅ Yes | ❌ No | ❌ No | ❌ No |
| GPU Support | ✅ Yes | N/A | N/A | N/A |
| Self-Hosted | ✅ Yes | ❌ No | ❌ No | ❌ No |
| Open Source | ✅ MIT | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary |
| Custom Models | ✅ Yes (Ollama) | ❌ No | ❌ No | ❌ No |
| Speaker ID | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Real-time | ✅ Yes | ✅ Yes | ⚠️ Post-process | ✅ Yes |
| GDPR Ready | ✅ Yes | ⚠️ Depends | ⚠️ Depends | ⚠️ Depends |
When to choose Meetily:
- ✅ You need complete privacy/data sovereignty
- ✅ You have sensitive conversations (legal, medical, defense)
- ✅ You want offline capability
- ✅ You prefer open source
- ✅ You have GPU hardware

When alternatives might work:
- You need calendar integration (coming to Meetily PRO)
- You want zero setup
- You need phone app recording
- You have no local GPU
Meetily PRO vs Community Edition
Community Edition (Free Forever)
What you get:
- ✅ Real-time transcription (Whisper/Parakeet)
- ✅ AI summaries (Ollama support)
- ✅ Speaker diarization
- ✅ GPU acceleration
- ✅ Offline functionality
- ✅ Markdown/JSON export
- ✅ Basic audio mixing
Perfect for:
- Individual users
- Personal meetings
- Learning/experimentation
- Privacy-conscious professionals
Meetily PRO ($19/mo)
Additional features:
- ✅ Enhanced accuracy - Superior transcription models
- ✅ Custom summary templates - Tailored to your workflow
- ✅ Advanced exports - PDF, DOCX with formatting
- ✅ Auto-detect meetings - Automatic detection and joining
- ✅ Calendar integration - Sync with Google/Outlook
- ✅ Chat with meetings - AI-powered insights
- ✅ Self-hosted deployment - For teams (2-100 users)
- ✅ GDPR compliance tools - Audit trails, data controls
- ✅ Priority support - Direct access to team
Perfect for:
- Professionals with critical meetings
- Small teams (2-100 users)
- Compliance-focused organizations
- Power users needing advanced workflows
Real-World Use Cases
1. Legal Consultations
Challenge: Attorney-client privilege requires absolute confidentiality.
Solution:
Meetily + Ollama (no internet) + encrypted storage
Benefits:
- Zero risk of cloud leaks
- Complete audit trail
- Detailed transcripts for case files
- Summaries with action items
2. Medical Consultations
Challenge: HIPAA tightly restricts third-party handling of patient data; cloud processing requires business associate agreements.
Solution:
Meetily (air-gapped machine) + custom prompts for medical terminology
Benefits:
- Supports HIPAA compliance (no third-party processing)
- Accurate medical transcription
- Structured patient notes
- No third-party access
3. Defense/Government Contractors
Challenge: Classified discussions cannot leave secure network.
Solution:
Meetily on SCIF-approved hardware + Ollama with vetted models
Benefits:
- Meets security clearance requirements
- No external network calls
- Full data sovereignty
- Auditable processing
4. Executive Board Meetings
Challenge: Strategic discussions require confidentiality.
Solution:
Meetily + Gemma 3 4B + custom summary template
Benefits:
- No board leaks
- Detailed minutes generation
- Action item tracking
- Searchable history
5. Remote Team Standups
Challenge: Daily meetings need quick summaries, not full transcripts.
Solution:
Meetily + Fast model (Parakeet + Gemma 4B) + "standup" template
Benefits:
- < 1 minute summary generation
- Structured: Yesterday/Today/Blockers
- Shareable with team
- Historical standup archive
Tips for Best Results
1. Optimize Audio Quality
Microphone setup:
- ✅ Use a dedicated USB microphone (Blue Yeti, Rode NT-USB)
- ✅ Position it 6-8 inches from your mouth
- ✅ Use a pop filter to reduce plosives
- ✅ Record in a quiet room (< 40dB ambient noise)
- ❌ Don't use a laptop's built-in mic in a noisy environment

System audio (for remote meetings):

- ✅ Enable "system audio" capture in Meetily
- ✅ Use headphones to prevent echo/feedback
- ✅ Adjust ducking to balance voices
- ❌ Don't max out the volume (it causes clipping)
2. Improve Transcription Accuracy
Speaking tips:
- ✅ Speak clearly at a moderate pace
- ✅ Avoid talking over each other
- ✅ State names when switching speakers
- ✅ Spell out acronyms the first time: "API (A-P-I)"
- ❌ Don't whisper or speak too fast
Technical terms:
Create a custom vocabulary file:

`vocabulary.txt`:

```
- XandAI (proper noun)
- Kubernetes (K8s)
- PostgreSQL (database)
```
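Whether or not your Meetily build reads a vocabulary file, the same idea works as a post-processing pass over an exported transcript: map common mis-transcriptions of your jargon back to the canonical spelling. A standalone sketch, where the correction pairs are illustrative examples you would tailor yourself:

```python
import re

# Canonical spelling -> common mis-transcriptions (illustrative examples)
CORRECTIONS = {
    "Kubernetes": ["cooper netties", "kuber nettes"],
    "PostgreSQL": ["postgres sequel", "post gray sql"],
}

def fix_jargon(transcript: str) -> str:
    """Replace known mis-transcriptions with the canonical term (case-insensitive)."""
    for canonical, variants in CORRECTIONS.items():
        for variant in variants:
            transcript = re.sub(re.escape(variant), canonical,
                                transcript, flags=re.IGNORECASE)
    return transcript

print(fix_jargon("We deploy on cooper netties with postgres sequel."))
# -> "We deploy on Kubernetes with PostgreSQL."
```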
3. Better AI Summaries
Prompt engineering:
For technical meetings:

```
Summarize this software engineering meeting. Focus on:
1. Technical decisions and rationale
2. Code review feedback
3. Performance metrics discussed
4. Architecture changes
5. Action items with assigned engineers
```

For sales calls:

```
Summarize this sales call. Include:
1. Client pain points mentioned
2. Proposed solutions
3. Pricing discussion
4. Next steps and follow-up actions
5. Decision timeline
```
Model selection:
| Meeting Type | Best Model | Why |
|---|---|---|
| Quick standups | gemma3:4b | Fast, good enough |
| Important 1:1s | qwen2.5:7b | Better context understanding |
| Board meetings | llama3.3:70b | Highest quality, nuanced |
| Technical deep-dives | deepseek-coder:7b | Code-aware |
Security & Privacy
Data Storage
What Meetily stores locally:
```
~/Documents/Meetily/
├── Recordings/
│   ├── 2026-01-21_meeting.wav   (audio file)
│   └── ...
├── Transcripts/
│   ├── 2026-01-21_meeting.json  (structured transcript)
│   └── 2026-01-21_meeting.md    (readable format)
├── Summaries/
│   └── 2026-01-21_meeting_summary.md
└── config.json                  (settings)
```
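Because everything is plain files with dated names, housekeeping scripts are trivial to write. For example, this sketch groups a meeting directory's files by their `YYYY-MM-DD` prefix (the filename convention is taken from the layout above):

```python
import re
from collections import defaultdict

DATE_PREFIX = re.compile(r"^(\d{4}-\d{2}-\d{2})_")

def group_by_date(filenames: list[str]) -> dict[str, list[str]]:
    """Group Meetily-style filenames (YYYY-MM-DD_*) by their date prefix."""
    groups: dict[str, list[str]] = defaultdict(list)
    for name in filenames:
        m = DATE_PREFIX.match(name)
        if m:
            groups[m.group(1)].append(name)
    return dict(groups)

files = [
    "2026-01-21_meeting.wav",
    "2026-01-21_meeting_summary.md",
    "2026-01-22_standup.md",
]
print(group_by_date(files))
```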
Encryption (optional):

```bash
# Encrypt the meeting directory with disk/container encryption:
#   Windows: BitLocker
#   macOS:   FileVault
#   Linux:   LUKS (or VeraCrypt on any platform)

# Example with GPG for a single file:
gpg --encrypt --recipient your@email.com meeting.wav
```
Network Activity
What Meetily sends:
| Connection | Purpose | Can Disable? |
|---|---|---|
| `localhost:11434` | Ollama API (local) | ❌ Required for summaries |
| Nothing else | - | ✅ No external connections |
Verify with network monitoring:

```bash
# Linux
sudo tcpdump -i any port not 11434

# macOS
sudo tcpdump -i en0 port not 11434

# Should show ZERO traffic from Meetily
```
Compliance
GDPR:
- ✅ Data minimization (only audio + transcript)
- ✅ Right to erasure (delete files)
- ✅ Data portability (standard formats)
- ✅ Processing transparency (open source)
- ✅ No third-party processors

HIPAA:
- ✅ No ePHI transmission
- ✅ Local storage only
- ✅ Audit logging (file system)
- ✅ Access controls (OS-level)

SOC 2:
- ✅ Secure development (open source review)
- ✅ Availability (offline-capable)
- ✅ Confidentiality (no cloud)
Building from Source
For developers who want to customize or contribute:
Prerequisites
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install Node.js (v18+)
# macOS
brew install node

# Install pnpm
npm install -g pnpm

# Install platform-specific dependencies
# macOS
brew install cmake

# Linux
sudo apt-get install cmake libssl-dev pkg-config
```
Build Steps
```bash
# Clone repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes

# Install frontend dependencies
cd frontend
pnpm install

# Build with GPU support
./build-gpu.sh

# Or build without GPU
./build.sh

# Built app location:
# macOS:   ../backend/target/release/bundle/macos/Meetily.app
# Windows: ../backend/target/release/meetily.exe
# Linux:   ../backend/target/release/meetily
```
Development Mode
```bash
# Run in dev mode with hot reload
cd frontend
pnpm dev

# In another terminal, run the Rust backend
cd ../backend
cargo run
```
Contributing
Meetily is open source and welcomes contributions!
Ways to contribute:
- 🐛 Report bugs
- 💡 Suggest features
- 📝 Improve documentation
- 🔧 Submit pull requests
- ⭐ Star the repository
Contribution areas:
- Transcription accuracy improvements
- New LLM integrations
- UI/UX enhancements
- Performance optimizations
- Platform support (iOS, Android)
Roadmap
Upcoming features:
- 🎯 Speaker identification - Better diarization
- 📱 Mobile apps - iOS and Android
- 📅 Calendar integration - Auto-join meetings
- 💬 Chat with meetings - Ask questions about past meetings
- 🌍 Multi-language - Better international support
- 🎨 Custom themes - Dark mode, high contrast
- 📊 Analytics - Meeting insights and trends
Conclusion
Meetily + Ollama + Gemma 3 4B gives you:

- ✅ Complete privacy - No cloud, no tracking
- ✅ Zero cost - Free, open source
- ✅ Offline capability - Works without internet
- ✅ Professional quality - Parakeet transcription up to 4x faster than standard Whisper
- ✅ AI summaries - Intelligent meeting insights
- ✅ Full control - Self-hosted, customizable
Perfect for:
- Privacy-conscious professionals
- Regulated industries (legal, medical, defense)
- Teams wanting data sovereignty
- Anyone tired of subscription fees
Get started:

1. Install Ollama: `curl -fsSL https://ollama.com/install.sh | sh`
2. Pull the model: `ollama pull gemma3:4b`
3. Download Meetily: GitHub Releases
4. Start transcribing: Launch app → New Meeting
No cloud. No subscriptions. No compromise.
Resources
- Meetily GitHub: github.com/Zackriya-Solutions/meeting-minutes
- Meetily Website: meetily.ai
- Ollama: ollama.com
- Gemma Models: ollama.com/library/gemma3
- Discord Community: Meetily Discord
- Documentation: docs.meetily.ai
Meetily is free and open source (MIT License). Star the project on GitHub to support development! Your meeting data belongs to you.