This guide covers setting up Code Scanner on Linux distributions.
**Ubuntu/Debian:**

```bash
# Update package list
sudo apt update

# Install Python 3.10+
sudo apt install python3 python3-pip python3-venv

# Verify installation
python3 --version  # Should be 3.10 or higher
```

**Fedora:**

```bash
# Install Python
sudo dnf install python3 python3-pip

# Verify installation
python3 --version
```

**Arch Linux:**

```bash
# Install Python
sudo pacman -S python python-pip

# Verify installation
python --version
```

Git is required for tracking file changes in your repositories.
**Ubuntu/Debian:**

```bash
sudo apt install git
git --version
```

**Fedora:**

```bash
sudo dnf install git
git --version
```

**Arch Linux:**

```bash
sudo pacman -S git
git --version
```

Universal Ctags is required for symbol indexing, which enables AI tools to efficiently navigate your codebase.
**Ubuntu/Debian:**

```bash
sudo apt install universal-ctags
ctags --version  # Should show "Universal Ctags"
```

**Fedora:**

```bash
sudo dnf install ctags
ctags --version
```

**Arch Linux:**

```bash
sudo pacman -S ctags
ctags --version
```

Note: Make sure it's "Universal Ctags" (not "Exuberant Ctags"). Check with `ctags --version`.
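If you want to automate this check, a small shell helper can inspect the first line of `ctags --version`; this is a sketch, and the `check_ctags` function is our own name, not part of Code Scanner:

```bash
# Hypothetical helper: warn when the installed ctags is not Universal Ctags.
check_ctags() {
    case "$1" in
        "Universal Ctags"*)  echo "OK: Universal Ctags" ;;
        "Exuberant Ctags"*)  echo "WARNING: Exuberant Ctags found; install Universal Ctags" ;;
        *)                   echo "WARNING: could not identify ctags version" ;;
    esac
}

# Pass in the first line of the version output
check_ctags "$(ctags --version 2>/dev/null | head -n 1)"
```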
Ripgrep is required for fast code search across the repository.
**Ubuntu/Debian:**

```bash
sudo apt install ripgrep
rg --version
```

**Fedora:**

```bash
sudo dnf install ripgrep
rg --version
```

**Arch Linux:**

```bash
sudo pacman -S ripgrep
rg --version
```

UV is the recommended package manager:
```bash
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add to PATH (if not automatic)
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# Verify installation
uv --version
```

With UV installed, set up Code Scanner itself:

```bash
# Clone the repository
git clone https://github.com/ubego/Code-Scanner.git
cd Code-Scanner

# Install dependencies with UV
uv sync

# Verify installation
uv run code-scanner --help
```

Choose one of the following backends:
Ollama is lightweight and easy to use on Linux.
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start Ollama (runs as a service on most distros)
ollama serve &

# Pull a model
ollama pull qwen3:4b

# Verify it's working
ollama list
```
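When scripting the setup (for example, ahead of an autostart run), it can help to poll the Ollama HTTP API until the server is ready. A minimal sketch, assuming the default port 11434; `wait_for_api` is a hypothetical helper, not part of Code Scanner or Ollama:

```bash
# wait_for_api URL RETRIES -- poll URL once per second until it responds.
wait_for_api() {
    url=$1
    retries=${2:-30}
    i=0
    while [ "$i" -lt "$retries" ]; do
        if curl -fsS "$url" >/dev/null 2>&1; then
            echo "ready"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "timed out waiting for $url" >&2
    return 1
}

# Example: wait_for_api http://localhost:11434/api/tags 30
```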
```bash
curl http://localhost:11434/api/tags
```

Configuration for Ollama:
Copy a language-specific example (e.g., sample_configs/python-config.toml) and update the [llm] section:
```toml
[llm]
backend = "ollama"
host = "localhost"
port = 11434
model = "qwen3:4b"
timeout = 120
context_limit = 16384  # Required
```

LM Studio provides a GUI and is great for trying different models.
```bash
# Download the AppImage from https://lmstudio.ai
# Example for version 0.3.x:
wget https://releases.lmstudio.ai/linux/x86_64/LM-Studio-x.x.x.AppImage

# Make it executable
chmod +x LM-Studio-*.AppImage

# Run LM Studio
./LM-Studio-*.AppImage
```

In LM Studio:
- Search for "qwen2.5-coder-7b-instruct"
- Download the model
- Load the model
- Go to "Local Server" tab (click the "<->" icon)
- Set "Context Length" to at least 16384 in the right sidebar
- Click "Start Server" (default port: 1234)
Configuration for LM Studio:
Copy a language-specific example (e.g., sample_configs/python-config.toml) and update the [llm] section:
```toml
[llm]
backend = "lm-studio"
host = "localhost"
port = 1234
timeout = 120
context_limit = 16384  # Required
```

With a backend running and configured, you can scan a project:

```bash
# Navigate to your project
cd /path/to/your/project

# Create code_scanner_config.toml (see sample_configs/)
cp /path/to/code-scanner/sample_configs/python-config.toml code_scanner_config.toml

# Run the scanner (runs continuously until Ctrl+C)
uv run code-scanner
```

You can configure Code Scanner to start automatically on login using systemd user services.
Prerequisites:

- Code Scanner installed and working via command line
- systemd user services available (most modern Linux distributions)
- LLM backend (Ollama or LM Studio) installed
Run the autostart script:
```bash
./scripts/autostart-linux.sh
```

The script will interactively guide you through:
- Project path - The directory to scan
- Config file path - Your `code_scanner_config.toml` location
- Test launch - Verifies the scanner works before registering
- Service registration - Creates a systemd user service
What the script does:

- Detects legacy services and offers to remove them
- Validates paths for project and config file
- Test launches the scanner to verify configuration
- Creates a systemd service at `~/.config/systemd/user/code-scanner.service`
- Enables autostart on user login
- Includes a 60-second delay to allow LLM backend startup
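For reference, the generated unit typically looks something like the sketch below; the paths, `ExecStart` command, and delay mechanism here are illustrative assumptions, not the script's exact output:

```ini
# Illustrative sketch of ~/.config/systemd/user/code-scanner.service;
# the unit written by autostart-linux.sh may differ.
[Unit]
Description=Code Scanner

[Service]
Type=simple
# Delay startup so the LLM backend has time to come up
ExecStartPre=/bin/sleep 60
# %h expands to your home directory; adjust the project path
WorkingDirectory=%h/path/to/your/project
ExecStart=/usr/bin/env uv run code-scanner
Restart=on-failure

[Install]
WantedBy=default.target
```

Manage it with the `systemctl --user` commands shown next.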
```bash
# Check status
systemctl --user status code-scanner

# View logs
journalctl --user -u code-scanner -f

# Stop service
systemctl --user stop code-scanner

# Restart service
systemctl --user restart code-scanner

# Disable autostart
systemctl --user disable code-scanner

# Remove service completely
systemctl --user stop code-scanner
systemctl --user disable code-scanner
rm ~/.config/systemd/user/code-scanner.service
systemctl --user daemon-reload
```

Log locations:

- Scanner log: `~/.code-scanner/code_scanner.log`
- Results: `<project>/code_scanner_results.md`
- systemd logs: `journalctl --user -u code-scanner`
**Service won't start:**

- Check logs: `journalctl --user -u code-scanner -f`
- Verify config file path is correct
- Ensure LLM backend is running
- Try increasing the startup delay in the service file
**Lock file errors:** Another instance may be running. Check with:

```bash
cat ~/.code-scanner/code_scanner.lock
ps aux | grep code-scanner
```

Delete stale lock if needed:

```bash
rm ~/.code-scanner/code_scanner.lock
```

**User services not working:** Enable lingering for your user:

```bash
loginctl enable-linger $USER
```

**Ollama permission errors:**

```bash
# Add yourself to the ollama group
sudo usermod -aG ollama $USER

# Log out and back in, or:
newgrp ollama
```

**LM Studio AppImage won't run:** Install FUSE (required for AppImages):

```bash
# Ubuntu/Debian:
sudo apt install fuse libfuse2

# Fedora:
sudo dnf install fuse fuse-libs
```

**Out of memory:** Large models require significant RAM. Try:
- Use a smaller model (7B instead of 13B)
- Close other applications
- Use Ollama's memory-efficient quantized models
**Slow first response:** The first query is slow because the model must be loaded into memory; subsequent queries are faster. Consider keeping Ollama running as a service.
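If you used the official install script, Ollama normally registers a systemd service named `ollama`. A sketch of keeping it running across reboots; the `ollama_unit_present` helper is our own name, not part of Ollama:

```bash
# Hypothetical helper: report whether an ollama systemd unit is installed.
ollama_unit_present() {
    if command -v systemctl >/dev/null 2>&1 &&
       systemctl list-unit-files ollama.service --no-legend 2>/dev/null | grep -q '^ollama'; then
        echo yes
    else
        echo no
    fi
}

if [ "$(ollama_unit_present)" = "yes" ]; then
    # Enable at boot and start now
    sudo systemctl enable --now ollama
else
    echo "ollama.service not found; start it manually with: ollama serve &"
fi
```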