1 change: 1 addition & 0 deletions .eslintignore
@@ -1,2 +1,3 @@
lib
examples
benchmark/*.js
126 changes: 126 additions & 0 deletions .github/workflows/benchmark.yml
@@ -0,0 +1,126 @@
name: Benchmark

on:
  pull_request:
    branches:
      - master

permissions:
  contents: read
  pull-requests: write

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout PR branch
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 0

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'
          cache: 'yarn'

      - name: Install dependencies
        run: yarn install --frozen-lockfile

      - name: Build project
        run: yarn build

      - name: Run PR benchmarks
        id: pr-benchmark
        continue-on-error: true
        run: |
          echo "Running benchmarks on PR branch..."
          EXIT_CODE=0
          # `|| EXIT_CODE=$?` keeps the step's `-e` shell from exiting
          # before the output file is printed below.
          yarn benchmark > pr-benchmark.txt 2>&1 || EXIT_CODE=$?
          cat pr-benchmark.txt
          exit $EXIT_CODE

      - name: Check PR benchmark status
        if: steps.pr-benchmark.outcome == 'failure'
        run: echo "⚠️ PR benchmarks failed - will attempt to extract partial results"

      - name: Extract PR results
        id: pr-results
        run: node benchmark/extract-results.js pr-benchmark.txt pr-results.json

      - name: Save benchmark scripts
        run: |
          mkdir -p /tmp/benchmark-scripts
          cp benchmark/extract-results.js /tmp/benchmark-scripts/
          cp benchmark/compare-results.js /tmp/benchmark-scripts/

      - name: Checkout base branch
        run: |
          git fetch origin ${{ github.event.pull_request.base.ref }}
          git checkout origin/${{ github.event.pull_request.base.ref }}

      - name: Install dependencies (base)
        run: yarn install --frozen-lockfile

      - name: Build project (base)
        run: yarn build

      - name: Run base benchmarks
        id: base-benchmark
        continue-on-error: true
        run: |
          echo "Running benchmarks on base branch..."
          EXIT_CODE=0
          yarn benchmark > base-benchmark.txt 2>&1 || EXIT_CODE=$?
          cat base-benchmark.txt
          exit $EXIT_CODE

      - name: Check base benchmark status
        if: steps.base-benchmark.outcome == 'failure'
        run: echo "⚠️ Base benchmarks failed - will attempt to extract partial results"

      - name: Extract base results
        id: base-results
        run: node /tmp/benchmark-scripts/extract-results.js base-benchmark.txt base-results.json

      - name: Compare and format results
        id: compare
        run: node /tmp/benchmark-scripts/compare-results.js base-results.json pr-results.json > comment.txt

      - name: Post comment to PR
        uses: actions/github-script@v6
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const fs = require('fs');
            const comment = fs.readFileSync('comment.txt', 'utf8');

            // Find the existing benchmark comment, if any
            const { data: comments } = await github.rest.issues.listComments({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
            });

            // Use a distinct parameter name so the outer `comment`
            // (the new body) is not shadowed inside the callback.
            const benchmarkComment = comments.find((c) =>
              c.body.includes('📊 Benchmark Results')
            );

            if (benchmarkComment) {
              // Update the existing comment in place
              await github.rest.issues.updateComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                comment_id: benchmarkComment.id,
                body: comment
              });
            } else {
              // Create a new comment
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.issue.number,
                body: comment
              });
            }
3 changes: 3 additions & 0 deletions .gitignore
@@ -5,3 +5,6 @@ lib
yarn-error.log
package-lock.json
coverage
*-benchmark.txt
*-results.json
comment.txt
101 changes: 101 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,101 @@
# Benchmarks

This directory contains performance benchmarks for node-casbin.

## Running Benchmarks Locally

To run the benchmarks locally:

```bash
yarn benchmark
```

This will:
1. Build the CommonJS version of the library
2. Build the benchmark suite
3. Run all benchmarks and display results

## What is Benchmarked

The benchmark suite tests the performance of:

### RBAC Model
- `enforce()` (async) - both allow and deny cases
- `enforceSync()` - both allow and deny cases
- `getRolesForUser()` - get user roles
- `hasRoleForUser()` - check if user has a specific role

### ABAC Model
- `enforce()` (async) - attribute-based access control
- `enforceSync()` - attribute-based access control

### Basic Model
- `enforce()` (async) - basic access control
- `enforceSync()` - basic access control

### Policy Management
- `getPolicy()` - retrieve all policies
- `hasPolicy()` - check if policy exists
- `getFilteredPolicy()` - retrieve filtered policies

## Automated Benchmarking

The benchmark workflow automatically runs on every Pull Request:

1. **Runs benchmarks on PR branch** - measures performance of proposed changes
2. **Runs benchmarks on base branch** - establishes baseline performance
3. **Compares results** - calculates percentage changes
4. **Posts comment to PR** - displays results in an easy-to-read table

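The extract step above parses the console output that Benchmark.js produces (lines of the form `name x N ops/sec ±x% (n runs sampled)`). The actual `benchmark/extract-results.js` is not reproduced in this diff; as a rough sketch of what such a script does (the names `parseBenchmarkLine` and `extractResults` are illustrative, not the script's real API):

```typescript
// Illustrative sketch: turn Benchmark.js console output into
// comparable { name, hz } records. Assumes the standard
// "name x 12,345 ops/sec ±1.23% (85 runs sampled)" line format.

interface BenchResult {
  name: string;
  hz: number; // operations per second
}

function parseBenchmarkLine(line: string): BenchResult | null {
  const m = line.match(/^(.+?) x ([\d,]+(?:\.\d+)?) ops\/sec/);
  if (!m) return null;
  return {
    name: m[1].trim(),
    hz: Number(m[2].replace(/,/g, '')), // strip thousands separators
  };
}

function extractResults(output: string): BenchResult[] {
  return output
    .split('\n')
    .map(parseBenchmarkLine)
    .filter((r): r is BenchResult => r !== null);
}
```

Writing the records to a JSON file (one per branch) is what makes the base-vs-PR comparison step possible even when a run only partially completes.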
### Understanding Benchmark Results

The PR comment will show:
- **🚀** - Significant improvement (>5%)
- **✅** - Improvement (0-5%)
- **➖** - No significant change
- **⬇️** - Minor regression (0-5%)
- **⚠️** - Regression (>5%)

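The mapping from percentage change to marker can be sketched as follows. This is a hypothetical version of the logic in `benchmark/compare-results.js`: the `EPSILON` band for "no significant change" is an assumption, since the exact cutoff is not specified here.

```typescript
// Hypothetical sketch: classify an ops/sec change into the markers
// listed above. EPSILON (in percent) is an assumed band for "no
// significant change"; the real script may use a different value.
const EPSILON = 0.5;

function percentChange(baseHz: number, prHz: number): number {
  return ((prHz - baseHz) / baseHz) * 100;
}

function classify(change: number): string {
  if (change > 5) return '🚀';        // significant improvement (>5%)
  if (change > EPSILON) return '✅';  // improvement (0-5%)
  if (change < -5) return '⚠️';       // regression (>5%)
  if (change < -EPSILON) return '⬇️'; // minor regression (0-5%)
  return '➖';                        // no significant change
}
```

For example, a benchmark that goes from 100,000 to 110,000 ops/sec is a +10% change and would be marked 🚀.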
### What to Do About Regressions

If your PR shows performance regressions:

1. **Review the changes** - identify what might cause the slowdown
2. **Profile the code** - use Node.js profiling tools to find bottlenecks
3. **Consider alternatives** - can the same functionality be achieved more efficiently?
4. **Document trade-offs** - if the regression is unavoidable, document why the change is necessary

Small regressions (<5%) are generally acceptable if:
- The change adds important functionality
- The change improves code maintainability
- The change fixes a bug or security issue

## Adding New Benchmarks

To add new benchmarks, edit `benchmark/benchmark.ts`:

```typescript
// Create a new suite. `createSuite` and `resolve` are assumed to come
// from the surrounding code in benchmark.ts, where each suite runs
// inside its own Promise so suites execute sequentially.
const mySuite = createSuite('My Feature');

mySuite
  .add('My benchmark', {
    defer: true, // required for async tests
    fn: async (deferred: Benchmark.Deferred) => {
      await myFunction();
      deferred.resolve(); // tell Benchmark.js this iteration is done
    },
  })
  .on('complete', () => {
    resolve(); // resolve the enclosing Promise so the next suite can run
  })
  .run({ async: true });
```

For synchronous tests, omit the `defer` option:

```typescript
mySuite.add('My sync benchmark', () => {
mySyncFunction();
});
```