Merged
50 changes: 50 additions & 0 deletions .gitignore
@@ -0,0 +1,50 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
.env
venv/
ENV/
env/

# Jupyter Notebook
.ipynb_checkpoints
*.ipynb_checkpoints/

# IDEs
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Node modules
node_modules/
npm-debug.log
yarn-error.log

# Temporary files
*.tmp
*.log
.cache/
121 changes: 119 additions & 2 deletions README.md
@@ -1,2 +1,119 @@
# MechanisticInterpretabilityShowcase
Mechanistic Interpretability tools we are developing.
# Mechanistic Interpretability Showcase

A research showcase page demonstrating mechanistic interpretability tools developed by the Precision Neuro Lab. This project aims to make neural networks more transparent and understandable through visualization and analysis techniques.

## Features

- **Clean, Professional Interface**: Modern web design showcasing research and tools
- **Responsive Design**: Works seamlessly across desktop, tablet, and mobile devices
- **Interactive Navigation**: Smooth scrolling and intuitive user experience
- **Extensible Structure**: Easy to add demos, research papers, and interactive visualizations

## Getting Started

### Prerequisites

No build tools or dependencies required! This is a pure HTML/CSS/JavaScript project.

### Running Locally

1. Clone the repository:
```bash
git clone https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase.git
cd MechanisticInterpretabilityShowcase
```

2. Open `index.html` in your web browser:
- **Option 1**: Double-click `index.html`
- **Option 2**: Use a local server (recommended for development):
```bash
# Using Python 3
python -m http.server 8000

# Using Node.js (if http-server is installed)
npx http-server
```
- **Option 3**: Use VS Code Live Server extension

3. If you started a local server, navigate to `http://localhost:8000` in your browser

## Project Structure

```
MechanisticInterpretabilityShowcase/
├── index.html # Main HTML page
├── styles.css # Styling and layout
├── script.js # Interactive functionality
├── docs/ # Documentation and examples
├── README.md # This file
└── LICENSE # License information
```
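The `script.js` entry above provides the interactive navigation. A minimal sketch of smooth-scrolling anchor links is shown below; this is illustrative only and not necessarily the repository's actual implementation (the `anchorTargetId` helper is a hypothetical name introduced here):

```javascript
// Resolve which element id an in-page link points at.
// Kept as a pure helper so the browser wiring below stays simple.
function anchorTargetId(href) {
  return href && href.startsWith("#") ? href.slice(1) : null;
}

// Browser wiring (runs only where `document` exists):
if (typeof document !== "undefined") {
  document.querySelectorAll('a[href^="#"]').forEach(link => {
    link.addEventListener("click", event => {
      const id = anchorTargetId(link.getAttribute("href"));
      const target = id && document.getElementById(id);
      if (target) {
        event.preventDefault();
        // Smoothly scroll the target section into view
        target.scrollIntoView({ behavior: "smooth" });
      }
    });
  });
}
```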

## Customization

### Adding New Research Cards

Edit `index.html` and add new cards in the `research-section`:

```html
<div class="research-card">
<h3>Your Research Title</h3>
<p>Description of your research.</p>
</div>
```

### Adding Interactive Demos

Add demo cards in the `demos-section` of `index.html`:

```html
<div class="demo-card">
<h3>Demo Title</h3>
<p>Demo description.</p>
<a href="demo-link.html" class="btn btn-primary">Try Demo</a>
</div>
```

### Styling

Modify colors and styles in `styles.css`. Key CSS variables are defined at the top:

```css
:root {
--primary-color: #2563eb;
--secondary-color: #7c3aed;
/* Add more custom variables */
}
```
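These variables can then be referenced with `var()` anywhere in the stylesheet. For example (illustrative only; the actual selectors in `styles.css` may differ):

```css
.btn-primary {
  background-color: var(--primary-color);
  color: #fff;
}

.btn-primary:hover {
  /* Switch to the secondary accent on hover */
  background-color: var(--secondary-color);
}
```

Changing a variable once in `:root` updates every rule that references it.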

## Contributing

We welcome contributions! Please feel free to submit issues or pull requests.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## Future Enhancements

- [ ] Add interactive visualizations using D3.js or Three.js
- [ ] Integrate Jupyter notebooks for live demos
- [ ] Add research paper listings with links
- [ ] Create video tutorials section
- [ ] Add team member profiles
- [ ] Implement search functionality

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Contact

For questions or collaborations, please visit our [GitHub repository](https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase).

## Acknowledgments

- Inspired by research in mechanistic interpretability
- Built for the Precision Neuro Lab community
35 changes: 35 additions & 0 deletions docs/README.md
@@ -0,0 +1,35 @@
# Documentation

Welcome to the Mechanistic Interpretability Showcase documentation!

## Contents

- [Getting Started](getting-started.md) - Quick start guide
- [Examples](examples.md) - Example use cases and demos
- [Contributing](contributing.md) - How to contribute to the project

## Overview

This showcase demonstrates tools and techniques for understanding neural network behavior through mechanistic interpretability. Our goal is to make AI systems more transparent and interpretable.

## What is Mechanistic Interpretability?

Mechanistic interpretability is the study of understanding neural networks by identifying and analyzing the specific algorithms and circuits they learn. Rather than treating neural networks as black boxes, this approach aims to reverse-engineer their internal mechanisms.

### Key Concepts

1. **Feature Visualization**: Understanding what individual neurons or layers respond to
2. **Circuit Analysis**: Identifying computational pathways within networks
3. **Activation Patterns**: Analyzing how information flows through the network
4. **Intervention Studies**: Testing hypotheses about network behavior through targeted modifications
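The intervention idea above can be sketched on a toy two-layer network: zero out one hidden unit and compare outputs to estimate that unit's causal effect. This is an illustrative sketch only, not code from this repository (all names here are introduced for the example):

```javascript
// Element-wise ReLU over a vector
function relu(v) { return v.map(x => Math.max(0, x)); }

// Matrix-vector product; W is an array of rows
function matVec(W, v) {
  return W.map(row => row.reduce((s, w, i) => s + w * v[i], 0));
}

// Forward pass with an optional intervention: zero one hidden unit
function forward(x, W1, W2, ablateUnit = null) {
  let h = relu(matVec(W1, x));   // hidden activations
  if (ablateUnit !== null) {
    h = h.slice();
    h[ablateUnit] = 0;           // targeted modification
  }
  return matVec(W2, h);
}

// Compare baseline vs. ablated outputs on fixed toy weights
const W1 = [[1, 0], [0, 1]];
const W2 = [[1, 1], [0, 1]];
const x = [2, 3];
const baseline = forward(x, W1, W2);    // hidden = [2, 3]
const ablated = forward(x, W1, W2, 1);  // hidden unit 1 zeroed
```

A large baseline-vs-ablated difference suggests the zeroed unit carries information the output depends on; repeating this across units and inputs is the core loop of an intervention study.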

## Resources

- [Main Showcase Page](../index.html)
- [GitHub Repository](https://github.com/PrecisionNeuroLab/MechanisticInterpretabilityShowcase)

## Quick Links

- **Research**: Learn about our latest findings
- **Demos**: Try interactive demonstrations
- **Tools**: Access our open-source tools