
Meetily: Privacy-First Meeting Transcription with Ollama and Gemma 3 4B

Complete guide to setting up Meetily for offline meeting transcription and AI summarization using Ollama with Gemma 3 4B. No cloud required, 100% local processing.

AI · Privacy · Ollama · Self-Hosted · Transcription · Meetings

In an era where every meeting tool sends your data to the cloud, Meetily (formerly meeting-minutes) stands out as a 100% local, privacy-first AI meeting assistant.

No cloud. No subscriptions. No data leaving your machine.

This guide shows you how to set up Meetily with Ollama and Gemma 3 4B for completely offline meeting transcription and AI-powered summaries.

What is Meetily?

Meetily is an open-source AI meeting assistant built with Rust and Tauri that:

  • Transcribes meetings in real-time using Whisper or Parakeet models
  • Generates AI summaries using local LLMs via Ollama
  • Captures audio from both microphone and system (screen sharing)
  • Works completely offline - no internet connection needed
  • Supports GPU acceleration (CUDA, Metal, Vulkan)
  • Provides speaker diarization to distinguish between speakers

Key stats:

  • ⭐ 9.4k+ GitHub stars
  • 🔒 Privacy-first by design
  • 🚀 4x faster than standard Whisper
  • 💰 Free & open source (MIT License)

Why Meetily Matters

The Privacy Problem

Most meeting AI tools create significant risks:

| Risk | Impact |
|------|--------|
| Data breaches | Average cost: $4.4M (IBM 2024) |
| GDPR fines | €5.88 billion issued by 2025 |
| Legal issues | 400+ unlawful recording cases in California per year |

Cloud meeting tools promise convenience but deliver:

  • Unclear data storage practices
  • Potential unauthorized access
  • Vendor lock-in
  • Compliance nightmares

Meetily solves this:

  • ✅ Complete data sovereignty
  • ✅ Zero vendor lock-in
  • ✅ Full control over sensitive conversations
  • ✅ GDPR compliant by design

Architecture Overview

Meetily is built as a single, self-contained application:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚         Meetily (Tauri App)             β”‚
β”‚                                         β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”  β”‚
β”‚  β”‚   Frontend   β”‚   β”‚   Backend    β”‚  β”‚
β”‚  β”‚  (Next.js)   │◄───   (Rust)     β”‚  β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚
β”‚                             β”‚           β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚  Audio Capture & Processing      β”‚ β”‚
β”‚  β”‚  - Microphone                     β”‚ β”‚
β”‚  β”‚  - System audio                   β”‚ β”‚
β”‚  β”‚  - Professional mixing            β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β”‚                 β”‚                       β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚  Transcription Engine             β”‚ β”‚
β”‚  β”‚  - Whisper (OpenAI)               β”‚ β”‚
β”‚  β”‚  - Parakeet (NVIDIA) - 4x faster  β”‚ β”‚
β”‚  β”‚  - GPU Acceleration               β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β”‚                 β”‚                       β”‚
β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚
β”‚  β”‚  AI Summarization                 β”‚ β”‚
β”‚  β”‚  - Ollama (local LLMs)            β”‚ β”‚
β”‚  β”‚  - Claude API (optional)          β”‚ β”‚
β”‚  β”‚  - Groq API (optional)            β”‚ β”‚
β”‚  β”‚  - Custom OpenAI endpoint         β”‚ β”‚
β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚
         β–Ό
    Local Storage
    (Recordings, Transcripts, Summaries)

Installation

Prerequisites

Before installing Meetily, you need:

  1. Ollama (for AI summaries)
  2. Meetily application
  3. Gemma 3 4B model (optional, but recommended)

Step 1: Install Ollama

macOS/Linux:

curl -fsSL https://ollama.com/install.sh | sh

Windows: Download from ollama.com/download

Verify installation:

ollama --version

Step 2: Pull Gemma 3 4B Model

Gemma 3 4B is perfect for meeting summaries: fast, accurate, and small enough to run on modest hardware.

# Pull the model (one-time download, ~3.3GB)
ollama pull gemma3:4b

# Verify it works
ollama run gemma3:4b

Alternative models:

| Model | Size | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| gemma3:4b | ~3.3GB | ⚡⚡⚡ | ⭐⭐⭐ | Recommended - best balance |
| llama3.2:3b | 2GB | ⚡⚡⚡ | ⭐⭐ | Fastest, lower quality |
| qwen2.5:7b | 4.7GB | ⚡⚡ | ⭐⭐⭐⭐ | Higher quality, slower |
| llama3.3:70b | 43GB | ⚡ | ⭐⭐⭐⭐⭐ | Best quality, needs GPU |

Step 3: Install Meetily

🪟 Windows:

  1. Download x64-setup.exe from Releases
  2. Right-click β†’ Properties β†’ Check Unblock β†’ OK
  3. Run installer (Click More info β†’ Run anyway if warned)

🍎 macOS:

  1. Download meetily_0.2.0_aarch64.dmg from Releases
  2. Open .dmg file
  3. Drag Meetily to Applications folder
  4. Open from Applications (may need to allow in System Settings β†’ Privacy & Security)

🐧 Linux:

Build from source:

# Clone repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/frontend

# Install dependencies
pnpm install

# Build with GPU support
./build-gpu.sh

# The built app will be in ../backend/target/release/

Configuration

Step 4: Configure Ollama in Meetily

  1. Launch Meetily

  2. Go to Settings (gear icon)

  3. Configure AI Provider:

    • Select "Ollama" as AI provider
    • Endpoint: http://localhost:11434 (default)
    • Model: gemma3:4b
    • Temperature: 0.7 (balanced creativity)
  4. Configure Transcription:

    • Model: Parakeet (4x faster) or Whisper (more accurate)
    • Language: Auto-detect or specify
    • Enable GPU acceleration (if available)
  5. Audio Settings:

    • Microphone: Select your input device
    • System Audio: Enable if capturing screen shares
    • Audio Mixing: Automatic ducking and clipping prevention

Configuration File (Optional)

For advanced users, edit the config directly:

Location:

  • Windows: %APPDATA%\meetily\config.json
  • macOS: ~/Library/Application Support/meetily/config.json
  • Linux: ~/.config/meetily/config.json

Example config:

{
  "ai_provider": "ollama",
  "ollama": {
    "endpoint": "http://localhost:11434",
    "model": "gemma3:4b",
    "temperature": 0.7,
    "max_tokens": 2000
  },
  "transcription": {
    "model": "parakeet",
    "language": "auto",
    "gpu_acceleration": true,
    "real_time": true
  },
  "audio": {
    "microphone_device": "default",
    "system_audio_enabled": true,
    "auto_ducking": true
  }
}
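A quick way to catch typos before launch is to validate the file programmatically. A minimal sketch in Python (the required keys mirror the example config above; treat them as illustrative if your Meetily version differs):

```python
# Minimal sanity check for Meetily's config.json.
# Field names follow the example config above (assumed, not an official schema).
import json
from pathlib import Path

REQUIRED = {
    "ai_provider": str,
    "ollama": dict,
    "transcription": dict,
    "audio": dict,
}

def validate_config(cfg: dict) -> list:
    """Return a list of human-readable problems (empty list = OK)."""
    problems = []
    for key, typ in REQUIRED.items():
        if key not in cfg:
            problems.append(f"missing key: {key}")
        elif not isinstance(cfg[key], typ):
            problems.append(f"{key} should be a {typ.__name__}")
    endpoint = str(cfg.get("ollama", {}).get("endpoint", ""))
    if not endpoint.startswith("http"):
        problems.append("ollama.endpoint should be an http(s) URL")
    return problems

# Usage (Linux path; see the locations listed above):
#   cfg = json.loads((Path.home() / ".config/meetily/config.json").read_text())
#   print(validate_config(cfg) or "config looks OK")
```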

Using Meetily: Complete Workflow

1. Start a Meeting

Launch Meetily β†’ Click "New Meeting"

What happens:

  • Microphone starts recording
  • Real-time transcription begins
  • Transcript appears live on screen

2. During the Meeting

Live transcript:

[00:01] Speaker 1: Let's discuss the Q1 roadmap.
[00:03] Speaker 2: I think we should prioritize the API redesign.
[00:07] Speaker 1: Agreed. What about the mobile app?
...
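The transcript lines follow a simple `[MM:SS] Speaker N: text` layout, so post-processing them yourself is a one-regex job. A Python sketch (the format is assumed from the excerpt above):

```python
# Parse Meetily-style live-transcript lines into structured records.
import re

LINE = re.compile(r"^\[(\d{2}):(\d{2})\]\s+(.+?):\s+(.*)$")

def parse_transcript(lines):
    """Turn "[MM:SS] Speaker N: text" lines into {seconds, speaker, text} dicts."""
    records = []
    for line in lines:
        m = LINE.match(line.strip())
        if not m:
            continue  # skips blank lines and the "..." continuation marker
        mm, ss, speaker, text = m.groups()
        records.append({
            "seconds": int(mm) * 60 + int(ss),
            "speaker": speaker,
            "text": text,
        })
    return records
```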

Controls:

  • ⏸️ Pause/Resume transcription
  • 🎤 Mute/Unmute microphone
  • 💾 Save current transcript
  • ❌ Stop meeting

3. Generate AI Summary

After the meeting ends (or during):

Click "Generate Summary"

Ollama processes locally:

Connecting to Ollama...
Using model: gemma3:4b
Analyzing transcript (1,243 words)...
Generating summary...

Example output:

## Meeting Summary

**Date:** January 21, 2026
**Duration:** 23 minutes
**Participants:** 3 speakers

### Key Discussion Points

1. **Q1 Roadmap Planning**
   - Prioritize API redesign (due: Feb 15)
   - Mobile app updates scheduled for March
   - Backend optimization ongoing

2. **Action Items**
   - [ ] Speaker 1: Draft API specification by Jan 25
   - [ ] Speaker 2: Review mobile mockups
   - [ ] Speaker 3: Conduct performance testing

3. **Decisions Made**
   - Approved API v2 proposal
   - Delayed feature X to Q2
   - Budget increase for infrastructure

### Next Steps
- Weekly sync meetings starting Feb 1
- API documentation sprint next week

4. Export Options

Available formats:

  • Markdown (.md) - Default
  • Plain text (.txt)
  • JSON (.json) - Structured data
  • PDF (Meetily PRO)
  • DOCX (Meetily PRO)

Export location:

  • Windows: Documents\Meetily\Meetings\
  • macOS: ~/Documents/Meetily/Meetings/
  • Linux: ~/Documents/Meetily/Meetings/

Advanced: Custom Summary Prompts

You can customize how Gemma 3 4B generates summaries by modifying the system prompt.

Default prompt:

You are a professional meeting assistant. Analyze the following 
transcript and provide:

1. A brief summary (2-3 sentences)
2. Key discussion points
3. Action items with owners
4. Decisions made
5. Next steps

Format the output in Markdown.

Custom prompt for technical meetings:

You are a technical meeting assistant specialized in software 
development. Analyze this engineering meeting transcript and provide:

1. Technical decisions made
2. Architecture discussions
3. Code review notes
4. Performance considerations
5. Action items with assigned engineers
6. Blocked items requiring resolution

Use technical terminology appropriately. Format in Markdown.

How to apply:

Edit config.json:

{
  "ai_provider": "ollama",
  "ollama": {
    "model": "gemma3:4b",
    "custom_prompt": "Your custom prompt here..."
  }
}
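To iterate on a custom prompt without relaunching Meetily, you can send the same kind of request straight to a locally running Ollama. A sketch against Ollama's /api/generate endpoint (the prompt layout here is illustrative, not Meetily's exact internal template):

```python
# Build and send a non-streaming summary request to Ollama's /api/generate.
import json
import urllib.request

def build_summary_request(model, system_prompt, transcript, temperature=0.7):
    """Assemble the JSON payload for a one-shot summary request."""
    return {
        "model": model,
        "system": system_prompt,
        "prompt": f"Transcript:\n\n{transcript}",
        "stream": False,
        "options": {"temperature": temperature},
    }

def summarize(payload, endpoint="http://localhost:11434"):
    """POST the payload to local Ollama and return the generated text."""
    req = urllib.request.Request(
        f"{endpoint}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swap in your custom prompt as `system_prompt` and compare summaries side by side before committing the prompt to config.json.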

Performance Optimization

GPU Acceleration

Automatic detection:

  • macOS: Apple Silicon (Metal) + CoreML
  • Windows/Linux: NVIDIA (CUDA), AMD/Intel (Vulkan)

Check if GPU is active:

# Ollama GPU check
ollama ps

# Should show: "GPU: NVIDIA GeForce RTX..." or "GPU: Apple M1/M2/M3"

Force GPU usage:

# Linux/Windows with NVIDIA
export CUDA_VISIBLE_DEVICES=0

# Run Ollama
ollama serve

Model Performance Benchmarks

Tested on various hardware:

| Hardware | Model | Transcription | Summary Generation |
|----------|-------|---------------|--------------------|
| M1 MacBook | Parakeet + Gemma 3 4B | ~0.5x realtime | ~2 sec/1000 words |
| M3 MacBook Pro | Parakeet + Gemma 3 4B | ~0.3x realtime | ~1 sec/1000 words |
| RTX 3060 (12GB) | Parakeet + Gemma 3 4B | ~0.4x realtime | ~1.5 sec/1000 words |
| RTX 4090 | Whisper + Qwen 7B | ~0.2x realtime | ~0.5 sec/1000 words |
| CPU only (i7) | Whisper Tiny + Gemma 3 4B | ~2x realtime | ~8 sec/1000 words |

Recommendations:

  • Minimum: 8GB RAM, decent CPU
  • Recommended: 16GB RAM, GPU with 4GB+ VRAM
  • Optimal: 32GB RAM, GPU with 8GB+ VRAM

Memory Usage

| Component | RAM Usage | VRAM Usage |
|-----------|-----------|------------|
| Meetily app | ~200MB | - |
| Parakeet transcription | ~1GB | ~2GB (GPU) |
| Ollama (idle) | ~500MB | - |
| Gemma 3 4B (active) | ~3GB | ~2.5GB (GPU) |
| Total (GPU) | ~4.7GB | ~4.5GB |
| Total (CPU only) | ~8GB | N/A |

Troubleshooting

Ollama Connection Issues

Problem: "Failed to connect to Ollama"

Solutions:

  1. Check if Ollama is running:

    # Check status
    curl http://localhost:11434/api/tags
    
    # Should return list of models
    
  2. Restart Ollama:

    # macOS/Linux
    killall ollama
    ollama serve
    
    # Windows
    # Close Ollama from system tray, then reopen
    
  3. Verify model is pulled:

    ollama list
    # Should show: gemma3:4b
    
  4. Check firewall:

    • Allow port 11434 in firewall
    • Windows: netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434
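The checks above can be wrapped into one script. A Python sketch that queries /api/tags and confirms the configured model is actually pulled (response parsing is kept as a pure function so it's testable without a running server):

```python
# One-shot Ollama connectivity + model check, mirroring the manual steps above.
import json
import urllib.request

def model_available(tags_response, model):
    """True if `model` appears in Ollama's /api/tags model list."""
    names = [m.get("name", "") for m in tags_response.get("models", [])]
    return model in names

def check_ollama(endpoint="http://localhost:11434", model="gemma3:4b"):
    try:
        with urllib.request.urlopen(f"{endpoint}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except OSError as exc:
        return f"Ollama unreachable at {endpoint}: {exc}"
    if not model_available(tags, model):
        return f"Ollama is running, but '{model}' is missing (run: ollama pull {model})"
    return "OK"
```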

Transcription Issues

Problem: "Slow or delayed transcription"

Solutions:

  1. Enable GPU acceleration:

    • Check Settings β†’ Transcription β†’ GPU Acceleration
    • Rebuild with GPU support (Linux): ./build-gpu.sh
  2. Use faster model:

    • Switch from Whisper to Parakeet (4x faster)
    • Settings β†’ Transcription β†’ Model: Parakeet
  3. Lower audio quality:

    • Reduce sample rate to 16kHz
    • Settings β†’ Audio β†’ Sample Rate: 16000

Problem: "Microphone not detected"

Solutions:

  1. Grant permissions:

    • macOS: System Settings β†’ Privacy & Security β†’ Microphone β†’ Enable Meetily
    • Windows: Settings β†’ Privacy β†’ Microphone β†’ Allow apps
  2. Select correct device:

    • Settings β†’ Audio β†’ Microphone β†’ Choose your device
  3. Test audio:

    # Test microphone recording (Linux)
    arecord -l  # List devices
    arecord -D hw:0,0 -d 5 test.wav  # Record 5 seconds
    

Summary Generation Issues

Problem: "Summary is low quality or off-topic"

Solutions:

  1. Use better model:

    # Pull higher quality model
    ollama pull qwen2.5:7b
    
    # Update Meetily settings to use qwen2.5:7b
    
  2. Adjust temperature:

    • Lower temperature (0.3-0.5): More factual, less creative
    • Higher temperature (0.8-1.0): More creative, less precise
    • Recommended: 0.7
  3. Improve transcript quality:

    • Ensure clear audio (reduce background noise)
    • Use better microphone
    • Speak clearly and at moderate pace
  4. Custom prompt:

    • Add specific instructions to system prompt
    • Example: "Focus on action items" or "Emphasize technical details"

Comparison: Meetily vs Alternatives

| Feature | Meetily | Otter.ai | Fireflies.ai | Grain |
|---------|---------|----------|--------------|-------|
| Privacy | ✅ 100% Local | ❌ Cloud | ❌ Cloud | ❌ Cloud |
| Cost | ✅ Free (OSS) | $16.99/mo | $10/mo | $15/mo |
| Offline | ✅ Yes | ❌ No | ❌ No | ❌ No |
| GPU Support | ✅ Yes | N/A | N/A | N/A |
| Self-Hosted | ✅ Yes | ❌ No | ❌ No | ❌ No |
| Open Source | ✅ MIT | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary |
| Custom Models | ✅ Yes (Ollama) | ❌ No | ❌ No | ❌ No |
| Speaker ID | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Real-time | ✅ Yes | ✅ Yes | ⚠️ Post-process | ✅ Yes |
| GDPR Ready | ✅ Yes | ⚠️ Depends | ⚠️ Depends | ⚠️ Depends |

When to choose Meetily:

  • ✅ You need complete privacy/data sovereignty
  • ✅ You have sensitive conversations (legal, medical, defense)
  • ✅ You want offline capability
  • ✅ You prefer open source
  • ✅ You have GPU hardware

When alternatives might work:

  • ❌ You need calendar integration (coming to Meetily PRO)
  • ❌ You want zero setup
  • ❌ You need phone app recording
  • ❌ You have no local GPU

Meetily PRO vs Community Edition

Community Edition (Free Forever)

What you get:

  • ✅ Real-time transcription (Whisper/Parakeet)
  • ✅ AI summaries (Ollama support)
  • ✅ Speaker diarization
  • ✅ GPU acceleration
  • ✅ Offline functionality
  • ✅ Markdown/JSON export
  • ✅ Basic audio mixing

Perfect for:

  • Individual users
  • Personal meetings
  • Learning/experimentation
  • Privacy-conscious professionals

Meetily PRO ($19/mo)

Additional features:

  • ✅ Enhanced accuracy - Superior transcription models
  • ✅ Custom summary templates - Tailored to your workflow
  • ✅ Advanced exports - PDF, DOCX with formatting
  • ✅ Auto-detect meetings - Automatic detection and joining
  • ✅ Calendar integration - Sync with Google/Outlook
  • ✅ Chat with meetings - AI-powered insights
  • ✅ Self-hosted deployment - For teams (2-100 users)
  • ✅ GDPR compliance tools - Audit trails, data controls
  • ✅ Priority support - Direct access to team

Perfect for:

  • Professionals with critical meetings
  • Small teams (2-100 users)
  • Compliance-focused organizations
  • Power users needing advanced workflows

Learn more about PRO β†’

Real-World Use Cases

1. Legal Consultations

Challenge: Attorney-client privilege requires absolute confidentiality.

Solution:

Meetily + Ollama (no internet) + encrypted storage

Benefits:

  • Zero risk of cloud leaks
  • Complete audit trail
  • Detailed transcripts for case files
  • Summaries with action items

2. Medical Consultations

Challenge: HIPAA compliance prohibits cloud processing of patient data.

Solution:

Meetily (air-gapped machine) + custom prompts for medical terminology

Benefits:

  • HIPAA compliant by design
  • Accurate medical transcription
  • Structured patient notes
  • No third-party access

3. Defense/Government Contractors

Challenge: Classified discussions cannot leave secure network.

Solution:

Meetily on SCIF-approved hardware + Ollama with vetted models

Benefits:

  • Meets security clearance requirements
  • No external network calls
  • Full data sovereignty
  • Auditable processing

4. Executive Board Meetings

Challenge: Strategic discussions require confidentiality.

Solution:

Meetily + Gemma 3 4B + custom summary template

Benefits:

  • No board leaks
  • Detailed minutes generation
  • Action item tracking
  • Searchable history

5. Remote Team Standups

Challenge: Daily meetings need quick summaries, not full transcripts.

Solution:

Meetily + fast models (Parakeet + Gemma 3 4B) + "standup" template

Benefits:

  • < 1 minute summary generation
  • Structured: Yesterday/Today/Blockers
  • Shareable with team
  • Historical standup archive

Tips for Best Results

1. Optimize Audio Quality

Microphone setup:

✅ Use dedicated USB microphone (Blue Yeti, Rode NT-USB)
✅ Position 6-8 inches from mouth
✅ Use pop filter to reduce plosives
✅ Record in quiet room (< 40dB ambient noise)
❌ Don't use laptop built-in mic in noisy environment

System audio (for remote meetings):

✅ Enable "system audio" capture in Meetily
✅ Use headphones to prevent echo/feedback
✅ Adjust ducking to balance voices
❌ Don't max out volume (causes clipping)

2. Improve Transcription Accuracy

Speaking tips:

  • ✅ Speak clearly at moderate pace
  • ✅ Avoid talking over each other
  • ✅ State names when switching speakers
  • ✅ Spell out acronyms first time: "API (A-P-I)"
  • ❌ Don't whisper or speak too fast

Technical terms:

Create custom vocabulary file:
vocabulary.txt:
  - XandAI (proper noun)
  - Kubernetes (K8s)
  - PostgreSQL (database)

3. Better AI Summaries

Prompt engineering:

For technical meetings:
"Summarize this software engineering meeting. Focus on:
1. Technical decisions and rationale
2. Code review feedback
3. Performance metrics discussed
4. Architecture changes
5. Action items with assigned engineers"

For sales calls:
"Summarize this sales call. Include:
1. Client pain points mentioned
2. Proposed solutions
3. Pricing discussion
4. Next steps and follow-up actions
5. Decision timeline"

Model selection:

| Meeting Type | Best Model | Why |
|--------------|------------|-----|
| Quick standups | gemma3:4b | Fast, good enough |
| Important 1:1s | qwen2.5:7b | Better context understanding |
| Board meetings | llama3.3:70b | Highest quality, nuanced |
| Technical deep-dives | deepseek-coder:7b | Code-aware |
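If you script model switching, the table above reduces to a small lookup helper (the meeting-type keys are illustrative, not anything Meetily defines):

```python
# Model-per-meeting lookup mirroring the selection table above.
MODEL_BY_MEETING = {
    "quick_standup": "gemma3:4b",
    "important_1on1": "qwen2.5:7b",
    "board_meeting": "llama3.3:70b",
    "technical_deep_dive": "deepseek-coder:7b",
}

def pick_model(meeting_type, default="gemma3:4b"):
    """Choose a summarization model for a meeting type, with a safe default."""
    return MODEL_BY_MEETING.get(meeting_type, default)
```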

Security & Privacy

Data Storage

What Meetily stores locally:

~/Documents/Meetily/
β”œβ”€β”€ Recordings/
β”‚   β”œβ”€β”€ 2026-01-21_meeting.wav (audio file)
β”‚   └── ...
β”œβ”€β”€ Transcripts/
β”‚   β”œβ”€β”€ 2026-01-21_meeting.json (structured transcript)
β”‚   └── 2026-01-21_meeting.md (readable format)
β”œβ”€β”€ Summaries/
β”‚   └── 2026-01-21_meeting_summary.md
└── config.json (settings)

Encryption (optional):

# Encrypt meeting directory with VeraCrypt or similar
# Windows: BitLocker
# macOS: FileVault
# Linux: LUKS

# Example with GPG:
gpg --encrypt --recipient your@email.com meeting.wav

Network Activity

What Meetily sends:

| Connection | Purpose | Can Disable? |
|------------|---------|--------------|
| localhost:11434 | Ollama API (local) | ❌ Required for summaries |
| Nothing else | - | ✅ No external connections |

Verify with network monitoring:

# Linux
sudo tcpdump -i any port not 11434

# macOS
sudo tcpdump -i en0 port not 11434

# Should show ZERO traffic from Meetily

Compliance

GDPR:

  • ✅ Data minimization (only audio + transcript)
  • ✅ Right to erasure (delete files)
  • ✅ Data portability (standard formats)
  • ✅ Processing transparency (open source)
  • ✅ No third-party processors
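The "right to erasure" point can be automated with a small retention script over the local storage tree described under Data Storage above. A sketch (destructive by design, so the actual delete is left commented out):

```python
# Find meeting artifacts older than N days under the Meetily storage tree.
import time
from pathlib import Path

def expired_files(root, max_age_days, now=None):
    """Yield files under `root` last modified more than `max_age_days` ago."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

# Usage (review the list before enabling unlink):
#   for f in expired_files(Path.home() / "Documents" / "Meetily", 90):
#       print("would delete:", f)
#       # f.unlink()
```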

HIPAA:

  • ✅ No ePHI transmission
  • ✅ Local storage only
  • ✅ Audit logging (file system)
  • ✅ Access controls (OS-level)

SOC 2:

  • ✅ Secure development (open source review)
  • ✅ Availability (offline-capable)
  • ✅ Confidentiality (no cloud)

Building from Source

For developers who want to customize or contribute:

Prerequisites

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Install Node.js (v18+)
# macOS
brew install node

# Install pnpm
npm install -g pnpm

# Install platform-specific dependencies
# macOS
brew install cmake

# Linux
sudo apt-get install cmake libssl-dev pkg-config

Build Steps

# Clone repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes

# Install frontend dependencies
cd frontend
pnpm install

# Build with GPU support
./build-gpu.sh

# Or build without GPU
./build.sh

# Built app location:
# macOS: ../backend/target/release/bundle/macos/Meetily.app
# Windows: ../backend/target/release/meetily.exe
# Linux: ../backend/target/release/meetily

Development Mode

# Run in dev mode with hot reload
cd frontend
pnpm dev

# In another terminal, run Rust backend
cd ../backend
cargo run

Contributing

Meetily is open source and welcomes contributions!

Ways to contribute:

  • πŸ› Report bugs
  • πŸ’‘ Suggest features
  • πŸ“ Improve documentation
  • πŸ”§ Submit pull requests
  • ⭐ Star the repository

Contribution areas:

  • Transcription accuracy improvements
  • New LLM integrations
  • UI/UX enhancements
  • Performance optimizations
  • Platform support (iOS, Android)

Contributing Guide β†’

Roadmap

Upcoming features:

  • 🎯 Speaker identification - Better diarization
  • 📱 Mobile apps - iOS and Android
  • 🔗 Calendar integration - Auto-join meetings
  • 💬 Chat with meetings - Ask questions about past meetings
  • 🌐 Multi-language - Better international support
  • 🎨 Custom themes - Dark mode, high contrast
  • 📊 Analytics - Meeting insights and trends

Conclusion

Meetily + Ollama + Gemma 3 4B gives you:

✅ Complete privacy - No cloud, no tracking
✅ Zero cost - Free, open source
✅ Offline capability - Works without internet
✅ Professional quality - Parakeet transcription up to 4x faster than standard Whisper
✅ AI summaries - Intelligent meeting insights
✅ Full control - Self-hosted, customizable

Perfect for:

  • Privacy-conscious professionals
  • Regulated industries (legal, medical, defense)
  • Teams wanting data sovereignty
  • Anyone tired of subscription fees

Get started:

  1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
  2. Pull model: ollama pull gemma3:4b
  3. Download Meetily: GitHub Releases
  4. Start transcribing: Launch app β†’ New Meeting

No cloud. No subscriptions. No compromise.


Meetily is free and open source (MIT License). Star the project on GitHub to support development! Your meeting data belongs to you.

Found this helpful? Share your thoughts on GitHub