CareerCompass is an AI-powered career guidance platform that helps job seekers navigate their professional journey with confidence. By leveraging advanced AI technologies and vector search capabilities, we provide personalized job matching, CV analysis, and intelligent career recommendations.
Our Mission: To empower professionals with data-driven insights and AI-assisted guidance, making career transitions smoother and job searches more effective.
- AI-Powered Job Search - Natural language job search using semantic understanding
- Smart CV Analysis - Compare your CV against job postings with AI-driven insights
- Intelligent Job Matching - Find jobs that match your skills and experience
- Learning Recommendations - Get personalized course and resource suggestions
- Multi-Source Integration - Scrape and analyze jobs from LinkedIn and other platforms
CareerCompass is built with a modern, scalable architecture:
- FastAPI - High-performance Python web framework
- Superlinked - Advanced multi-modal vector framework with unified embeddings for complex data relationships
- Qdrant - High-performance vector database optimized for similarity search and semantic retrieval at scale
- Groq/OpenAI - LLM integration for intelligent analysis
- Docker - Containerized deployment
- Next.js 16 - React framework with server-side rendering
- React 19 - Latest React with concurrent features
- Tailwind CSS 4 - Modern utility-first CSS framework
- Radix UI - Accessible component primitives
- Framer Motion - Smooth animations
- Axios - HTTP client for API communication
- Superlinked Vector Search - Semantic job matching in natural language
- LLM Integration - CV analysis and recommendations
- Web Scraping - LinkedIn and GitHub data extraction
- Kaggle Datasets - Job posting data aggregation
AI Models Used:
- Llama 3.3 70B Versatile - Primary LLM for CV analysis and recommendations (via Groq)
- Llama 4 Maverick 17B 128e Instruct - Natural language query processing for semantic search
- IBM Granite Embedding Small English R2 - Text embedding for vector search
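As a rough illustration of how the backend can talk to these models, here is a minimal sketch of a chat-completion request against Groq's OpenAI-compatible endpoint using only the Python standard library. The helper name `build_chat_request` is hypothetical (not part of the codebase); the real backend may use an SDK instead, and `GROQ_API_KEY` must be set in the environment before sending.

```python
import json
import os
import urllib.request

GROQ_BASE_URL = "https://api.groq.com/openai/v1"  # OpenAI-compatible API

def build_chat_request(prompt: str,
                       model: str = "llama-3.3-70b-versatile") -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for a Groq-hosted model."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{GROQ_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid GROQ_API_KEY):
# with urllib.request.urlopen(build_chat_request("Summarize this CV: ...")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```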
Follow these steps to set up CareerCompass on your local machine:
- Python 3.12+
- Node.js 18+
- Docker & Docker Compose
- (Optional) NVIDIA GPU with CUDA support
git clone https://github.com/Llamallience/hackathon-final.git
cd hackathon-final
# Create a Python 3.12 virtual environment
python -m venv venv
# Activate the environment
# On Windows (PowerShell):
.\venv\Scripts\Activate.ps1
# On Windows (Command Prompt):
.\venv\Scripts\activate.bat
# On Linux/Mac:
source venv/bin/activate
# Install backend dependencies
cd backend
pip install -r requirements.txt
# Run the downloader script to fetch job datasets from Kaggle
python scripts/downloader.py
Verify that `backend/data/jobs.csv` has been created successfully. This file should contain the merged job postings from multiple Kaggle datasets.
# Process and normalize the job data
python scripts/normalize_jobs.py
Verify that `backend/data/schema.json` has been created successfully. This file contains the categorized job metadata.
Option A - For NVIDIA GPU Users:
# Use GPU-optimized docker-compose
docker-compose -f docker-compose.gpu.yml up -d
Option B - For CPU Users:
# Use standard docker-compose
docker-compose up -d
This will start three services:
- Qdrant (Vector Database) - http://localhost:6333
- Superlinked (Vector Search API) - http://localhost:8080
- Backend (FastAPI) - http://localhost:8000
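Once the stack is up, you can sanity-check that all three services respond. A minimal standard-library sketch (the `check_services` helper is hypothetical; the port mapping is taken from the list above):

```python
import urllib.request

# The three services started by docker-compose, with their default ports.
SERVICES = {
    "qdrant": "http://localhost:6333",
    "superlinked": "http://localhost:8080",
    "backend": "http://localhost:8000",
}

def check_services(services: dict[str, str] = SERVICES,
                   timeout: float = 3.0) -> dict[str, bool]:
    """Return a map of service name -> whether an HTTP request succeeded."""
    status: dict[str, bool] = {}
    for name, url in services.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                status[name] = True
        except Exception:
            status[name] = False
    return status

# Example: print(check_services())  # all True when the stack is healthy
```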
You have two options for loading the job data into Qdrant:
- Download the pre-processed snapshot from:
<snapshot_link> (to be provided)
- Navigate to the Qdrant Dashboard: http://localhost:6333/dashboard#/collections
- Click the "Upload Snapshot" button in the top-right corner
- Set collection name to: "default"
- Select and upload the downloaded snapshot file
- Wait until the collection status turns green
- Get the data loader configuration:
# Send GET request
curl http://localhost:8080/data-loader/
Example response:
{
"result": {
"job_postings": "DataLoaderConfig(path='data/jobs.csv', format=<DataFormat.CSV: 1>, name='job_postings', pandas_read_kwargs={'chunksize': 1000, 'converters': {'job_skills': <function <lambda> at 0x7faf18023b50>}})"
}
}
- Use the `name` value from the response to trigger data processing:
# Send POST request with the data loader name
curl -X POST http://localhost:8080/data-loader/job_postings/run
- Monitor the progress in Docker logs:
docker logs -f <superlinked_container_name>
The data loading process may take several minutes depending on the dataset size.
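If you are scripting this step instead of using curl, the loader names can be pulled straight out of the GET response shown above and turned into run URLs. A small sketch (the `loader_run_urls` helper is hypothetical):

```python
SUPERLINKED_URL = "http://localhost:8080"

def loader_run_urls(response: dict) -> list[str]:
    """From a GET /data-loader/ response, build the POST URLs that trigger
    each configured loader. The keys under "result" are the loader names
    (e.g. "job_postings")."""
    return [f"{SUPERLINKED_URL}/data-loader/{name}/run"
            for name in response.get("result", {})]
```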
# Navigate to frontend directory
cd ../frontend
# Install dependencies
npm install
# Start development server
npm run dev
The application will be available at: http://localhost:3000
- Welcome Page - Get an overview of CareerCompass features
- AI Job Search - Use natural language to search for jobs
- CV vs Job Analysis - Upload your CV and compare it with job postings
- Job Match - Upload your CV to find the best matching jobs
Create .env files in the appropriate directories:
backend/.env:
GROQ_API_KEY=your_groq_api_key_here
SUPERLINKED_URL=http://localhost:8080
backend/superlinked_app/.env:
# OpenAI-compatible API settings (Groq) for natural language query processing
OPENAI_API_KEY=your_groq_api_key_here
OPENAI_BASE_URL=https://api.groq.com/openai/v1
OPENAI_MODEL=meta-llama/llama-4-maverick-17b-128e-instruct
# Qdrant Vector Database (default localhost)
QDRANT_URL=http://qdrant:6333
QDRANT_API_KEY=
# Embedding model settings
TEXT_EMBEDDER_NAME=ibm-granite/granite-embedding-small-english-r2
# Data processing
CHUNK_SIZE=1000
Note: You can use the same `GROQ_API_KEY` for both `backend/.env` and `backend/superlinked_app/.env` files.
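For reference, the settings in `backend/superlinked_app/.env` map onto a configuration object roughly like the following. This is a sketch only: the class name and field layout are hypothetical (the real app may use a library such as pydantic-settings), but the variable names and defaults mirror the file above.

```python
import os
from dataclasses import dataclass, field

@dataclass
class SuperlinkedSettings:
    """Sketch of how backend/superlinked_app/.env might be read at startup."""
    openai_api_key: str = field(
        default_factory=lambda: os.environ.get("OPENAI_API_KEY", ""))
    openai_base_url: str = field(
        default_factory=lambda: os.environ.get(
            "OPENAI_BASE_URL", "https://api.groq.com/openai/v1"))
    openai_model: str = field(
        default_factory=lambda: os.environ.get(
            "OPENAI_MODEL", "meta-llama/llama-4-maverick-17b-128e-instruct"))
    qdrant_url: str = field(
        default_factory=lambda: os.environ.get("QDRANT_URL", "http://qdrant:6333"))
    text_embedder_name: str = field(
        default_factory=lambda: os.environ.get(
            "TEXT_EMBEDDER_NAME", "ibm-granite/granite-embedding-small-english-r2"))
    chunk_size: int = field(
        default_factory=lambda: int(os.environ.get("CHUNK_SIZE", "1000")))
```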
This project uses the following Kaggle datasets for demonstration purposes:
- Data Analyst Jobs - asaniczka/data-analyst-job-postings
- Data Engineer Jobs - asaniczka/linkedin-data-engineer-job-postings
- Data Scientist Jobs - asaniczka/data-scientist-linkedin-job-postings
- Software Engineer Jobs - asaniczka/software-engineer-job-postings-linkedin
These datasets are automatically downloaded and merged by the downloader.py script during the setup process.
Once the backend is running, access the interactive API documentation:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
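FastAPI also serves the machine-readable schema behind those docs at `/openapi.json`, which is handy for listing the available routes from a script. A small sketch (the helper names are hypothetical):

```python
import json
import urllib.request

BACKEND_URL = "http://localhost:8000"

def paths_from_spec(spec: dict) -> list[str]:
    """Extract the sorted route paths from an OpenAPI schema dict."""
    return sorted(spec.get("paths", {}))

def list_endpoints(base_url: str = BACKEND_URL) -> list[str]:
    """Fetch the FastAPI-generated OpenAPI schema and list its routes.
    Requires the backend to be running."""
    with urllib.request.urlopen(f"{base_url}/openapi.json") as resp:
        return paths_from_spec(json.load(resp))
```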
We welcome contributions! Please follow these steps:
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to your fork
- Open a Pull Request
This project is licensed under the MIT License.
Built with ❤️ by the Llamallience team during the hackathon.
