# feat: complete modernization to match transition-manager patterns #20
## Conversation
## Linting Migration (ESLint+Prettier → Biome)

- Remove ESLint, Prettier, and related configs
- Add Biome for unified linting and formatting
- Update lint-staged and pre-commit hooks

## Build System (NCC → Rollup ESM)

- Replace @vercel/ncc with Rollup for ESM bundling
- Add rollup.config.ts with TypeScript plugin
- Update tsconfig.json for ESM module resolution

## Testing (Jest → Vitest)

- Migrate from Jest to Vitest 4.x
- Add comprehensive test coverage (97%+ statements)
- New test files: fs-helper, utils, Issue, EventManager, action
- Fix Vitest 4.x mock compatibility with class syntax

## jira.js Upgrade (v2 → v5)

- Upgrade jira.js from v2.15.14 to v5.2.2
- Update all imports to use Version2Client pattern
- Update @actions/core and @actions/github

## E2E Infrastructure

- Add Docker Compose setup for Jira Data Center
- Add Playwright-based setup wizard automation
- Add E2E test scripts and configuration
- Add snapshot save/restore for fast CI

## Documentation

- Add CLAUDE.md with project instructions
- Add Google-style JSDoc to all TypeScript files
- Add e2e/README.md with testing documentation

## CI/CD Updates

- Add e2e-jira.yml workflow
- Add build-e2e-snapshots.yml workflow
- Add unit-tests.yml workflow
- Update existing workflows for Biome/Vitest
**Important:** Review skipped due to path filters. ⛔ Files ignored due to path filters (2). CodeRabbit blocks several paths by default; this can be overridden by explicitly including those paths in the path filters.

**Note:** CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

### 📝 Walkthrough

The PR replaces ESLint/Prettier with Biome, migrates tests from Jest to Vitest, adds comprehensive E2E infrastructure (Docker Compose, Playwright, snapshot save/restore/check), introduces multiple GitHub Actions workflows (E2E, snapshot releases, Claude integrations, CI failure responder), and refactors core runtime into typed classes and utilities while adding extensive unit and E2E tests.
### Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant Repo as Repository
    participant Runner as CI Runner (ubuntu-latest)
    participant Docker as Docker Engine
    participant Jira as Jira Container
    participant MySQL as MySQL Container
    participant Release as GitHub Release / Artifact
    GH->>Repo: workflow trigger (schedule / manual / PR)
    GH->>Runner: start job, checkout repo
    Runner->>Runner: setup Node, install deps, build e2e assets
    Runner->>Docker: pull Jira & MySQL images
    Docker->>MySQL: start mysql (healthcheck)
    Docker->>Jira: start jira (depends on mysql healthy)
    Runner->>Jira: run e2e:setup / seed (Playwright or API)
    Runner->>Docker: stop containers to ensure consistent state
    Runner->>Runner: create e2e snapshot tar.gz
    Runner->>Release: create/update release and upload snapshot artifact
    GH->>GH: update release notes (size, timestamp)
```
**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed
## Pull request overview
This pull request completes a comprehensive modernization of the repository to match patterns from github-action-jira-transition-manager, including:
Changes:
- Migrated linting from ESLint+Prettier to Biome for faster, unified tooling
- Replaced build system from @vercel/ncc to Rollup with ESM output
- Upgraded testing from Jest to Vitest 4.x with 171 tests and 97%+ coverage
- Updated jira.js from v2.15.14 to v5.2.2 with new type imports
- Added comprehensive E2E testing infrastructure with Docker, Playwright, and snapshot management
- Enhanced documentation with CLAUDE.md and extensive JSDoc comments
Reviewed changes
Copilot reviewed 55 out of 71 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| tsconfig.json | Modernized to ESNext modules with Bundler resolution, removed Jest types |
| vitest.config.ts | Unit test configuration with 50s timeout and coverage reporting |
| vitest.e2e.config.ts | E2E test configuration with sequential execution and 60s timeout |
| rollup.config.ts | New ESM bundler replacing @vercel/ncc |
| biome.json | Comprehensive Biome configuration replacing ESLint+Prettier |
| package.json | Updated dependencies and scripts for new toolchain |
| src/*.ts | Added extensive JSDoc documentation and type import syntax |
| tests/*.ts | Migrated from Jest to Vitest with vi.mock patterns |
| e2e/ | Complete E2E infrastructure with Docker Compose, Playwright setup, and snapshot utilities |
| .github/workflows/ | New workflows for unit tests, E2E tests, snapshots, and Claude integration |
**src/EventManager.ts** (outdated)

````ts
 * await manager.updateJiraFixVersion();
 * ```
 */
async updateJiraFixVersion(): Promise<undefined[]> {
````
**Copilot AI** commented on Jan 24, 2026:
The return type was changed from Promise<void[]> to Promise<undefined[]>. This is more accurate since the Promise.all returns undefined values explicitly set in line 276. However, this is an internal implementation detail - consider whether returning void[] (as a semantic "no value") is clearer than undefined[] (an array of explicit undefined values).
```ts
applyIssueList.push(
  new Issue(issueKey, this.jira, this.argv).build().then(async (issueObj) => {
    await issueObj.apply();
    return undefined;
  }),
```
**Copilot AI** commented on Jan 24, 2026:
The .then(async (issueObj) => { await issueObj.apply(); return undefined; }) pattern explicitly returns undefined to satisfy the Promise<undefined[]> type. This could be simplified to .then((issueObj) => issueObj.apply()) if the return type were changed to Promise<void[]>. The explicit undefined return adds unnecessary complexity.
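The simplification the reviewer describes can be sketched as follows; `ApplicableIssue` and `applyAll` are hypothetical stand-ins for the real `Issue`/`EventManager` types, not the repository's code:

```typescript
// Sketch of the reviewer's suggestion: returning the Promise<void> from
// apply() directly lets Promise.all infer void[] without the explicit
// `return undefined` wrapper.
interface ApplicableIssue {
  apply(): Promise<void>;
}

async function applyAll(issues: ApplicableIssue[]): Promise<void[]> {
  const applyIssueList: Promise<void>[] = issues.map((issue) => issue.apply());
  return Promise.all(applyIssueList);
}
```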
**biome.json** (outdated)

```jsonc
},
"files": {
  "ignoreUnknown": true,
  "includes": ["**", "!**/node_modules", "!**/dist", "!**/lib", "!**/coverage", "!**/e2e/dist", "!**/e2e/**/*.js", "!**/*.d.ts", "!**/commitlint.config.js", "!**/CHANGELOG.md", "!**/licenses.txt", "!**/.markdownlint.json", "!**/.markdownlintignore", "!**/.cspell.json", "!**/tsconfig.json", "!**/tsconfig.*.json"]
```
**Copilot AI** commented on Jan 24, 2026:
The Biome configuration includes `!**/e2e/**/*.js` in the ignore patterns, but `e2e/dist/` contains compiled JavaScript files that are already excluded via `!**/e2e/dist`, so the broader pattern is redundant for that output. Consider whether the `!**/e2e/**/*.js` pattern is needed, as it may prevent linting of intentional `.js` files in the `e2e` directory.
| "includes": ["**", "!**/node_modules", "!**/dist", "!**/lib", "!**/coverage", "!**/e2e/dist", "!**/e2e/**/*.js", "!**/*.d.ts", "!**/commitlint.config.js", "!**/CHANGELOG.md", "!**/licenses.txt", "!**/.markdownlint.json", "!**/.markdownlintignore", "!**/.cspell.json", "!**/tsconfig.json", "!**/tsconfig.*.json"] | |
| "includes": ["**", "!**/node_modules", "!**/dist", "!**/lib", "!**/coverage", "!**/e2e/dist", "!**/*.d.ts", "!**/commitlint.config.js", "!**/CHANGELOG.md", "!**/licenses.txt", "!**/.markdownlint.json", "!**/.markdownlintignore", "!**/.cspell.json", "!**/tsconfig.json", "!**/tsconfig.*.json"] |
```diff
 */
 export function nullIfEmpty(str?: string[] | null): string[] | null {
-  if (!str || !Array.isArray(str)) {
+  if (!(str && Array.isArray(str))) {
```
**Copilot AI** commented on Jan 24, 2026:
The condition !(str && Array.isArray(str)) uses De Morgan's law to check if str is falsy OR not an array. While semantically equivalent to !str || !Array.isArray(str), this pattern is less readable. Consider reverting to the original condition for clarity: if (!str || !Array.isArray(str)).
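A quick equivalence check of the two conditions discussed above (a sketch; the real function is `nullIfEmpty` in `src/utils.ts`, these helpers are illustrative only):

```typescript
// De Morgan form used in the PR
function isMissing(str?: string[] | null): boolean {
  return !(str && Array.isArray(str));
}

// The more readable form the review suggests reverting to
function isMissingReadable(str?: string[] | null): boolean {
  return !str || !Array.isArray(str);
}
```

Both reject `null`/`undefined` and accept real arrays identically; the review's point is purely about readability, not behavior.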
```ts
} else if (isError(error)) {
  core.error(error);
```
**Copilot AI** commented on Jan 24, 2026:
The error handling was changed from checking axios.isAxiosError with detailed response handling to a simple isError check. With the migration from jira.js v2 to v5, verify that v5 still uses axios under the hood and that error responses maintain the same structure. If jira.js v5 uses a different HTTP client (like fetch), error details may need different handling.
```ts
this.client = new Version2Client({
  host: this.baseUrl,
  telemetry: false,
  authentication: {
    basic: {
      // username: this.email,
      // password: this.token
      email: this.email,
      apiToken: this.token,
    },
  },
});
```
**Copilot AI** commented on Jan 24, 2026:
The telemetry: false option was removed from the Version2Client configuration. This may enable telemetry in jira.js v5. Verify whether jira.js v5 has telemetry enabled by default and if so, explicitly disable it to maintain privacy and avoid unexpected network calls: telemetry: false.
```ts
const config = getE2EConfig();
const client = new JiraE2EClient(config);
const startTime = Date.now();
const timeout = 120000; // 2 minutes max, not 600s
```
**Copilot AI** commented on Jan 24, 2026:
The wait timeout was reduced from 600s (10 minutes) to 120s (2 minutes) with fail-fast behavior. While this is good for CI speed, 2 minutes may be too aggressive for cold Jira starts, especially in resource-constrained environments. The comment on line 36 says "2 minutes max, not 600s" which contradicts the previous 10-minute timeout documented in e2e/scripts/e2e-config.ts line 76. Consider increasing to 3-5 minutes or making this configurable.
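One way to make the wait configurable, as the comment suggests, could look like this; the env var name `JIRA_WAIT_TIMEOUT_MS` and the 5-minute default are assumptions for illustration, not the project's actual config:

```typescript
// Hypothetical sketch: read the Jira wait timeout from an environment
// variable and fall back to a middle-ground default of 5 minutes.
const DEFAULT_WAIT_TIMEOUT_MS = 5 * 60 * 1000;

function getWaitTimeoutMs(env: Record<string, string | undefined> = process.env): number {
  const raw = env.JIRA_WAIT_TIMEOUT_MS;
  const parsed = raw ? Number.parseInt(raw, 10) : Number.NaN;
  // Reject non-numeric or non-positive values and use the default instead.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_WAIT_TIMEOUT_MS;
}
```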
| "bracketSpacing": true, | ||
| "bracketSameLine": true | ||
| }, | ||
| "globals": ["console", "process", "NodeJS", "describe", "it", "test", "expect", "beforeAll", "afterAll", "beforeEach", "afterEach", "vi"] |
**Copilot AI** commented on Jan 24, 2026:
The test globals are defined in biome.json line 99 (`vi`, `describe`, `it`, `expect`, etc.), but this doesn't provide TypeScript types. These globals need to be available to TypeScript as well. Ensure that either:

- test files use `import { describe, it, expect } from 'vitest'` explicitly, OR
- tsconfig.json includes vitest types in the `types` array, OR
- test files include `/// <reference types="vitest/globals" />`

Currently, the approach is inconsistent: some test files import from vitest while others may rely on globals.
| "globals": ["console", "process", "NodeJS", "describe", "it", "test", "expect", "beforeAll", "afterAll", "beforeEach", "afterEach", "vi"] | |
| "globals": ["console", "process", "NodeJS"] |
- Change action.yml runtime from node22 to node20 (node22 not supported)
- Rebuild dist/ to sync with source changes
- Apply Biome lint fixes
Actionable comments posted: 18
**Caution:** Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
.github/workflows/create_tag.yml (1)
33-53: Push version bump and tags to remote before creating release.
`yarn version --patch` creates a local commit and git tag, but the workflow never pushes them to the remote repository. When `softprops/action-gh-release` attempts to create a GitHub Release on line 54, the tag won't exist remotely and the release step will fail. The workflow also lacks the `contents: write` permission to push changes. Add a push step after the version update and configure the required permission:
Fix: Push after version bump
```diff
   - name: Update version
     id: version
     env:
       CHANGELOG: |
         # v%s
         ${{ steps.tag_version.outputs.changelog }}
     run: |
       ## yarn version will update the README with the new tagged version in the examples
       ## and also creates a step output called 'tag' which is the new version
       set -x
       yarn version --patch --message "${CHANGELOG}"
+  - name: Push version bump and tags
+    run: |
+      git push --follow-tags origin HEAD:main
```

Also add at the top of the workflow:

```yaml
permissions:
  contents: write
```

src/Issue.ts (2)
135-142: Inconsistent project key regex between Issue and utils. The regex here requires at least 2 letters (`[A-Za-z]{2,}`) and 2 digits (`\d{2,}`), while `issueIdRegEx` in `utils.ts` uses `[\dA-Za-z]+` (allowing digits in the project key) and `\d+` (allowing single-digit issue numbers). This inconsistency could cause issues to be parsed differently in different contexts. Consider aligning the patterns or documenting the intentional differences.
293-304: Potential runtime error if `fields` is undefined. If `this.issueObject.fields` is undefined (e.g., the API returns minimal data), the optional chaining `?.fixVersions` returns `undefined`, and calling `.map()` on `undefined` cast as `FixVersionObject[]` will throw a TypeError.

Proposed defensive fix:

```diff
 async getIssueFixVersions(fresh = false): Promise<string[]> {
   if (fresh || !this.issueObject) {
     await this.getJiraIssueObject();
   }
   if (!this.issueObject) {
     core.error(`Issue object can't be queried from Jira`);
     return [] as string[];
   }
-  return (this.issueObject?.fields?.fixVersions as FixVersionObject[])?.map((v) => {
-    return v.name as string;
-  });
+  const fixVersions = this.issueObject?.fields?.fixVersions as FixVersionObject[] | undefined;
+  if (!fixVersions) {
+    return [];
+  }
+  return fixVersions.map((v) => v.name as string);
 }
```
🤖 Fix all issues with AI agents
In @.github/workflows/claude-code-review.yml:
- Around line 22-26: Update the GitHub Actions permissions block so the Claude
Code Review action can post PR comments: change the pull-requests permission
from read to write (modify the pull-requests key) and consider updating contents
and issues to write as recommended (update contents and issues keys), keeping
id-token: write as-is; ensure the permissions section contains pull-requests:
write, contents: write, issues: write, and id-token: write.
In @.github/workflows/claude.yml:
- Around line 21-26: Update the workflow permissions so Claude can post
comments: change the permissions block keys "pull-requests" and "issues" from
read to write (keep "contents: read", "id-token: write", and "actions: read"
as-is) so the "permissions" section grants the workflow the ability to
create/update PR and issue comments.
In @.github/workflows/publish_action.yml:
- Line 14: The release action now uses
technote-space/release-github-actions@v8.0.3 which changed the CLEAN_TARGETS
default to also remove .mjs/.cjs/.mts/.cts; explicitly set the CLEAN_TARGETS
input for the action (or confirm the expanded list is acceptable) to avoid
inadvertently deleting release artifacts — edit the step that references uses:
technote-space/release-github-actions@v8.0.3 and add a CLEAN_TARGETS input with
the previous/default pattern you want (or a whitelist of safe extensions) so
cleanup behavior is deterministic.
In @.github/workflows/respond-to-ci-failures.yml:
- Around line 133-145: The code incorrectly assumes
github.rest.actions.downloadJobLogsForWorkflowRun returns JSON in `data`;
instead capture the full response and handle the 302 redirect by reading the
`response.headers.location` URL, then fetch that URL (using octokit.request or
fetch) to download the log archive and extract or read its content before
splitting into lines to populate `jobInfo.logs`; alternatively, if you don't
want to follow redirects, set `jobInfo.logs` to a URL string pointing to the job
logs using the `response.headers.location` value (or the standard GitHub Actions
job logs URL) and avoid calling `.split` on undefined — update the try block
that calls `github.rest.actions.downloadJobLogsForWorkflowRun` and the error
handler `logError` accordingly.
- Around line 201-205: The step currently embeds outputs directly into a quoted
string for JSON.parse (e.g. JSON.parse('${{ steps.get-failures.outputs.failures
}}')), which breaks when outputs contain newlines/quotes; instead export the
outputs as environment variables in the workflow step (e.g. env: FAILURES: ${{
steps.get-failures.outputs.failures }} and optionally EXISTING_COMMENT_ID: ${{
steps.check-comment.outputs.comment_id }}), then in the script read and parse
them from process.env (e.g. JSON.parse(process.env.FAILURES) and use
process.env.EXISTING_COMMENT_ID) or use shell-safe parsing (echo "$FAILURES" |
jq), replacing occurrences of failures and existingCommentId in the code
accordingly.
In @.husky/pre-commit:
- Line 1: The pre-commit hook currently only runs "yarn lint-staged" but lacks
the standard Husky header, so add the missing shebang and Husky shim lines used
by other hooks (e.g., the leading "#!/bin/sh" and the ". \"$(dirname
\"$0\")/_/husky.sh\"" shim) at the top of the pre-commit script so Husky
correctly initializes before running the existing "yarn lint-staged" command in
the pre-commit hook.
In `@e2e/README.md`:
- Around line 180-203: The fenced code block showing the directory structure in
e2e/README.md is missing a language tag; update that triple-backtick fence to
include a neutral language (e.g., text) so markdownlint is satisfied — locate
the directory tree block in e2e/README.md (the block that begins with "e2e/")
and change the opening ``` to ```text.
In `@e2e/scripts/jira-client.ts`:
- Around line 520-529: The JQL query in ensureIssue interpolates the summary
directly into searchIssues, risking JQL injection; sanitize/escape the summary
before building the query (e.g., replace backslashes with \\ and double quotes
with \\" or use a JQL-escaping helper) and then call searchIssues with the
escaped value; update the code around the searchResult = await
this.searchIssues(...) line (inside ensureIssue) to use the escaped summary
variable so searchResult.issues[0] is returned from a safe query.
In `@e2e/scripts/setup-jira-playwright.ts`:
- Around line 365-367: The current console output prints secrets (ADMIN_USER and
ADMIN_PASS) to stdout; update the final success logging so it does not emit the
full ADMIN_PASS value — either remove the password from the message, replace it
with a redacted representation (e.g., show only first/last char or fixed
asterisks), or use a central redact/logging helper before calling console.log;
locate the success logging lines that reference ADMIN_USER and ADMIN_PASS and
change them to log either only ADMIN_USER or a masked form of ADMIN_PASS (or
omit the password entirely) to prevent leaking CI-provided credentials.
In `@e2e/scripts/setup-jira.ts`:
- Around line 277-281: The debug log prints full session cookies (cookieHeader
built from cookies.join('; ')), which leaks auth material; change the logging so
it never outputs raw cookie or CSRF values — either gate the debug message
behind a strict debug flag (e.g. only log when process.env.DEBUG_JIRA ===
'true') or redact the value before printing (e.g. replace contents with
"[REDACTED]" or show only a fixed-safe fingerprint like a hash or first/last 4
chars). Apply the same change to the other occurrence that logs cookies/CSRF
tokens (the analogous cookieHeader/CSRF logging near lines 366-374) so neither
raw cookieHeader nor CSRF tokens are emitted to CI.
- Around line 404-416: The current check returns response.status < 500 which
treats 4xx as success; update the final return so 4xx are failures by returning
only statuses < 400 (or equivalently rely on response.ok plus allowed redirects)
— i.e. change the final expression using response.status to require < 400 (or
use response.ok || response.status === 302 || response.status === 303) so that
4xx responses cause the license submission to fail; locate the block using the
response variable in setup-jira.ts and replace the trailing return accordingly.
In `@e2e/scripts/snapshot-check.ts`:
- Around line 125-131: The function formatAge wrongly returns "1 hours ago"
because the hours < 24 branch treats 1 like plural; update formatAge to handle
the singular case explicitly (e.g., check if hours === 1 and return "1 hour ago"
before the `${hours} hours ago` path) or otherwise pluralize dynamically; modify
the function identified as formatAge to add that singular-hours conditional so
"1 hour ago" is returned when hours === 1.
- Around line 146-147: The code uses __dirname to build inputDir which fails in
ESM; replace its usage by deriving a dirname from import.meta.url using
fileURLToPath and dirname (or compute the snapshots path inline from
import.meta.url) and then set inputDir = path.join(derivedDir, '..', '..',
'snapshots'); update the top of the module to import fileURLToPath from
'node:url' and dirname from 'node:path' (or perform the inline conversion) so
functions/variables like inputDir rely on the computed __dirname equivalent
instead of the undefined __dirname.
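The `import.meta.url` replacement for `__dirname` could look like this sketch (assumes the script runs as an ES module; the `snapshots` path mirrors the comment's description of snapshot-check.ts):

```typescript
import path from 'node:path';
import { fileURLToPath } from 'node:url';

// ESM-safe equivalent of CommonJS __dirname.
const moduleDir = path.dirname(fileURLToPath(import.meta.url));

// Hypothetical snapshots directory, two levels up from this module.
const inputDir = path.join(moduleDir, '..', '..', 'snapshots');
```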
In `@e2e/scripts/snapshot-restore.ts`:
- Around line 111-128: The functions removeVolume and createVolume (and the
later code that runs docker commands around snapshot paths) currently use
execSync with interpolated shell strings which is unsafe for injection and
spaces; replace execSync usage with child_process.spawnSync or execFileSync and
pass docker and its arguments as an argv array (e.g., ['volume','rm',
volumeName] or ['run', ... , snapshotPath]) so arguments are not shell-parsed,
propagate/return success based on the spawned process exitCode and include
stderr in logs as needed, and ensure any snapshot path usage also uses argv to
avoid shell interpolation.
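The argv-array pattern can be sketched with a small runner (hypothetical helper; the real code would call it with `docker` and its subcommand arguments):

```typescript
import { spawnSync } from 'node:child_process';

// Pass the program and its arguments as an argv array so values like volume
// names or snapshot paths are never shell-parsed: no injection, no quoting
// bugs with spaces.
function run(program: string, args: string[]): { ok: boolean; stderr: string } {
  const result = spawnSync(program, args, { encoding: 'utf8' });
  return { ok: result.status === 0, stderr: result.stderr ?? '' };
}

// usage shape (not executed here): run('docker', ['volume', 'rm', volumeName])
```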
In `@e2e/scripts/snapshot-save.ts`:
- Around line 111-148: The saveVolume function builds a shell command string and
calls execSync(cmd.join(' ')) which is vulnerable to injection via outputPath or
volumeName; change this to use a safe child process API (e.g., spawnSync or
execFileSync) that accepts an argv array instead of a single shell string (keep
the argv elements that are currently in cmd but pass them as separate args),
ensure you do not pass { shell: true }, and construct the volume and backup
paths with path.basename/path.dirname only (or validate/sanitize outputPath)
before using them; update error handling in saveVolume to capture
spawnSync/execFileSync stderr/stdout for logging and still return false on
failure.
In `@e2e/scripts/wait-for-jira.ts`:
- Around line 106-111: The ESM-incompatible check using "require.main ===
module" should be replaced with an import.meta.url-based entry check: add
"fileURLToPath" import from "url" at the top, then replace the require.main
block with a runtime comparison using fileURLToPath(import.meta.url) ===
process.argv[1] and call waitForJira().catch(...) inside that branch; reference
symbols: waitForJira, require.main === module (remove), import.meta.url,
fileURLToPath, and process.argv.
In `@e2e/tests/fixversion.e2e.test.ts`:
- Around line 9-13: The tests hard-code projectKey as 'E2E' instead of using the
test configuration; change the declaration of the projectKey constant to read
from the initialized config (from getE2EConfig()) so tests use
config.test.projectKey at runtime; update any references to the projectKey
variable in this file (e2e/tests/fixversion.e2e.test.ts) and keep the
initialization order (config must be set before projectKey) to ensure timeout
and project overrides are respected when using JiraE2EClient.
In `@src/EventManager.ts`:
- Around line 269-280: The function updateJiraFixVersion has a type mismatch: it
currently declares Promise<undefined[]> and builds applyIssueList with an async
.then() that yields undefined; change the return type to Promise<void[]> and the
collection type to Promise<void>[], and inside the loop push the promise
returned by issueObj.apply() directly (remove the async wrapper and await in the
.then() callback) so Issue.apply() (Promise<void>) is returned and
Promise.all(applyIssueList) resolves to void[]; reference updateJiraFixVersion,
Issue.build()/apply(), and getIssueSetFromString to locate the code.
♻️ Duplicate comments (4)
src/utils.ts (1)

95-106: Readability concern on De Morgan's law usage already flagged. The condition `!(str && Array.isArray(str))` has been noted in a previous review. The equivalent `!str || !Array.isArray(str)` is more readable.

src/Jira.ts (1)
81-89: Telemetry setting concern was previously flagged. A previous review noted that `telemetry: false` was removed from the Version2Client configuration. Verify if jira.js v5 has telemetry enabled by default.

src/Issue.ts (1)
218-220: Error handling migration was previously flagged. A previous review noted the change from axios-specific error handling to the generic `isError` check. Verify that jira.js v5 error responses are properly captured.

.github/workflows/ci.yml (1)
17-26: Same action-version verification as in the unit-tests workflow. Please confirm the `@v6` tags for checkout/setup-node exist; otherwise pin to the latest published major (see the earlier workflow comment).

Also applies to: 47-65
🧹 Nitpick comments (23)
.vscode/settings.json (1)
1-29: VSCode settings still reference ESLint despite migration to Biome. The PR migrates linting from ESLint + Prettier to Biome, but this settings file still configures:

- `editor.defaultFormatter: "dbaeumer.vscode-eslint"` (line 7)
- `eslint.codeActionsOnSave` (lines 14-16, 18)
- `eslint.validate` (line 10)

Consider updating to use the Biome VSCode extension (`biomejs.biome`) for consistency with the new tooling, or document why ESLint settings are intentionally retained.

♻️ Suggested Biome-based settings:

```diff
 {
-  "eslint.packageManager": "yarn",
-  "eslint.codeAction.showDocumentation": {
-    "enable": true
-  },
-
-  "editor.defaultFormatter": "dbaeumer.vscode-eslint",
+  "editor.defaultFormatter": "biomejs.biome",
   "editor.formatOnSave": true,
-  "eslint.alwaysShowStatus": true,
-  "eslint.validate": ["javascript", "typescript"],
   "[json]": {
-    "editor.defaultFormatter": "esbenp.prettier-vscode"
+    "editor.defaultFormatter": "biomejs.biome"
   },
-  "editor.codeActionsOnSave": {
-    "source.fixAll.eslint": true
-  },
-  "[ts]": {
-    "editor.codeActionsOnSave": "source.fixAll.eslint"
-  },
   "javascript.format.semicolons": "ignore",
   "typescript.format.semicolons": "ignore",
-  "typescriptHero.imports.insertSemicolons": false,
   "typescript.preferences.quoteStyle": "single",
-  "javascript.preferences.quoteStyle": "single",
-  "vetur.format.defaultFormatterOptions": {
-    "prettier": {
-      "semi": false
-    }
-  }
+  "javascript.preferences.quoteStyle": "single"
 }
```

src/fs-helper.ts (2)
92-104: Consider using `stats.isFile()` for more precise file detection. Using `!stats.isDirectory()` returns `true` for symlinks, sockets, FIFOs, and other special file types. If the intent is to check for regular files only, use `stats.isFile()`.

♻️ Suggested change:

```diff
 export function fileExistsSync(path: string): boolean {
   if (!path) {
     throw new Error(empty_path_error_msg);
   }
   if (existsSync(path)) {
     const stats = fs.statSync(path);
-    if (!stats.isDirectory()) {
+    if (stats.isFile()) {
       return true;
     }
   }
   return false;
 }
```
128-140: Inconsistent error message phrasing. The error message "file not there" is informal and inconsistent with the other error messages in this file. Consider a more descriptive message.

♻️ Suggested change:

```diff
-  throw new Error(`Encountered an error when reading file '${path}': file not there`);
+  throw new Error(`File '${path}' does not exist`);
```
16-16: Dropped--frozen-lockfilemay cause inconsistent CI builds.Removing
--frozen-lockfileallows yarn to modify the lockfile during CI, which can lead to non-reproducible builds and mask lockfile drift between local development and CI.Proposed fix
- - run: yarn install + - run: yarn install --frozen-lockfile
20-20: GitHub token reference uses lowercase. While GitHub Actions is case-insensitive for `secrets.GITHUB_TOKEN`, the conventional and documented form is uppercase. This is minor but worth noting for consistency with GitHub documentation.

Proposed fix:

```diff
-          github_token: ${{ secrets.github_token }}
+          github_token: ${{ secrets.GITHUB_TOKEN }}
```
1-18: Regex allows invalid Jira project keys starting with digits.The pattern
[\dA-Za-z]+permits project keys beginning with digits (e.g.,123-456), but Jira project keys must start with a letter. This could lead to false positives when extracting issue keys.Proposed fix to enforce letter-first project keys
-export const issueIdRegEx = /([\dA-Za-z]+-\d+)/g; +export const issueIdRegEx = /([A-Za-z][\dA-Za-z]*-\d+)/g;
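Behavior of the proposed letter-first pattern, as a quick sketch: digit-led tokens such as `123-456` no longer match, while normal keys still do.

```typescript
// Stricter pattern: the project key must start with a letter.
const issueIdRegEx = /([A-Za-z][\dA-Za-z]*-\d+)/g;

const found = 'Fixes ABC-123, ignores 123-456, keeps J2EE-7'.match(issueIdRegEx) ?? [];
```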
132-142: `formatDate` uses the local timezone, which may produce inconsistent results. The function uses `getMonth()`, `getDate()`, and `getFullYear()`, which return values in the local timezone. When run in different environments (CI vs. local, different servers), the same timestamp could produce different date strings. Consider using UTC methods for consistent output.

Proposed fix using UTC methods:

```diff
 export function formatDate(date: string | number | Date): string {
   const d = new Date(date);
-  let month = `${d.getMonth() + 1}`;
-  let day = `${d.getDate()}`;
-  const year = d.getFullYear();
+  let month = `${d.getUTCMonth() + 1}`;
+  let day = `${d.getUTCDate()}`;
+  const year = d.getUTCFullYear();
   if (month.length < 2) month = `0${month}`;
   if (day.length < 2) day = `0${day}`;
   return [year, month, day].join('-');
 }
```

src/Jira.ts (2)
360-367: Redundant `&& id` check in condition. The `id` variable is already guaranteed to be truthy at this point, since lines 361-363 throw an error if `!id`. The `&& id` condition is redundant.

Proposed fix:

```diff
-    if (versionsToCreate.length > 0 && id) {
+    if (versionsToCreate.length > 0) {
       const promArray: Promise<boolean>[] = versionsToCreate.map(async (fV) => this.createFixVersion(id, fV));
       await Promise.all(promArray);
     }
```
269-287: `createFixVersion` always returns `true` on success. The method returns `true` unconditionally after a successful API call. Consider returning the created version object or ID for potential future use, or simplify to `Promise<void>` if the return value isn't meaningful.

e2e/docker/jira-dbconfig.xml (1)
8-12: Prefer non-root DB creds and environment-sourced secrets. Hard-coded root/123456 is risky even in dev/CI; consider a non-root user and sourcing credentials from env (e.g., compose env vars) to reduce accidental exposure.
__tests__/fs-helper.test.ts (1)
13-23: Consider sharing mock instances between default and named exports to prevent future divergence

The mock currently creates separate `vi.fn()` instances for the named exports and the default export. While this works today since `fs-helper.ts` only uses named imports (`import * as fs` and `import { readFileSync }`), sharing the same instances would prevent issues if the code is refactored to use default imports in the future.

♻️ Suggested refactor

```diff
+const existsSyncMock = vi.fn();
+const statSyncMock = vi.fn();
+const readFileSyncMock = vi.fn();
+
 vi.mock('node:fs', () => ({
-  existsSync: vi.fn(),
-  statSync: vi.fn(),
-  readFileSync: vi.fn(),
+  existsSync: existsSyncMock,
+  statSync: statSyncMock,
+  readFileSync: readFileSyncMock,
   default: {
-    existsSync: vi.fn(),
-    statSync: vi.fn(),
-    readFileSync: vi.fn(),
+    existsSync: existsSyncMock,
+    statSync: statSyncMock,
+    readFileSync: readFileSyncMock,
   },
 }));
```

.github/workflows/build-e2e-snapshots.yml (2)
8-13: Unused `force_rebuild` input parameter.

The `force_rebuild` input is defined but never referenced anywhere in the workflow steps. Either implement the conditional logic to skip rebuild when snapshots exist, or remove this unused input.

♻️ Example implementation

```diff
 - name: Check existing snapshots
+  if: inputs.force_rebuild != true
   env:
     GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
   run: |
     if gh release view ${{ env.SNAPSHOT_RELEASE_TAG }} &>/dev/null; then
       echo "Snapshots exist, skipping rebuild (use force_rebuild to override)"
       exit 0
     fi
```
66-75: Potential race condition when stopping containers.

The workflow stops containers in the "Save snapshots" step but then relies on the "Cleanup" step to run `yarn e2e:down`. If the compose stop fails silently, the cleanup may encounter unexpected state. Consider adding error checking or ensuring idempotency.

e2e/scripts/snapshot-save.ts (1)
157-172: Hardcoded version strings may drift from actual Docker images.

The metadata contains hardcoded `jiraVersion: '9.17.5'` and `mysqlVersion: '8.0'`, but these values are defined separately in the workflow file and Docker Compose. Consider extracting these to a shared config or deriving them from the running containers.

__tests__/EventManager.test.ts (1)
499-548: Consider adding error propagation test for `updateJiraFixVersion()`.

The tests verify successful execution and empty results, but don't test error handling when `Issue.build()` or `Issue.apply()` throws. Given that `failOnError` is a configuration option, testing both error modes would improve confidence.

💡 Suggested test case

```typescript
it('should propagate error when Issue.apply() fails and failOnError is true', async () => {
  // Arrange: Mock Issue to throw
  vi.doMock('../src/Issue', () => ({
    default: class MockIssue {
      async build() {
        return this;
      }
      async apply() {
        throw new Error('Jira API error');
      }
    },
  }));

  const argv: Args = {
    ...baseArgv,
    issues: 'PROJ-123',
    failOnError: true,
  };
  const manager = new EventManager(mockContext, mockJira, argv);

  // Act & Assert
  await expect(manager.updateJiraFixVersion()).rejects.toThrow('Jira API error');
});
```
105-117: Consider logging non-Error exceptions.

The error handler only logs when `isError(error)` is true. If a non-Error value is thrown (e.g., a string or object), it will be rethrown without logging, making debugging harder.

♻️ Suggested improvement

```diff
 } catch (error) {
   if (isError(error)) {
     core.error(error);
+  } else {
+    core.error(`Unknown error: ${String(error)}`);
   }
   throw error;
 }
```

e2e/scripts/jira-client.ts (1)
161-167: Type assertion `as any` bypasses type safety.

The `as any` cast when calling `createProject` for Cloud hides potential type mismatches. Consider using a more specific type or extending the jira.js types if the library's typings are incomplete.

__tests__/index.test.ts (2)
42-173: Mock duplication between inline mock and `jira-api.ts`.

The inline `MockJira` class duplicates logic that also exists in `__tests__/mocks/jira-api.ts`. While this is necessary due to Vitest's hoisting behavior, consider extracting shared mock data/logic to a common location that both can reference, reducing maintenance burden.

305-324: Test validates graceful handling but doesn't verify behavior.

The test `'handles missing fix_versions gracefully'` only checks that `fixVersions` is defined, but doesn't verify the action's behavior when fixVersions is empty. Consider asserting on the actual outcome (e.g., that no Jira updates are attempted).

💡 Suggested enhancement

```diff
 const settings: Args = inputHelper.getInputs();

 // When fix_versions input is empty/undefined, it returns [''] due to split behavior
 // This is expected behavior - the action handles empty strings in fixVersions array
 expect(settings.fixVersions).toBeDefined();
+
+// Verify action completes without attempting updates
+const action = new Action(github.context, settings);
+const result = await action.execute();
+expect(result).toEqual(true);
+// Verify no fix versions were added to the issue
+expect(mockIssueTest.fields.fixVersions).toEqual([]);
```
143-146: Consider using email format for JIRA_USER_EMAIL.

The `JIRA_USER_EMAIL` is set to `admin`, but Jira Cloud typically expects an email address for basic authentication. Since this is for a local Docker Jira Server instance where username authentication works, this is acceptable, but the variable name may be misleading.

Consider renaming to `JIRA_USERNAME` for clarity, or add a comment explaining this is for Jira Server/Data Center authentication.

src/EventManager.ts (1)
10-17: Consider handling the 'NO_TOKEN' fallback more explicitly.

The token resolution falls back to the `'NO_TOKEN'` string when no token is available. Downstream code should handle this sentinel value appropriately to avoid confusing API authentication failures.

Consider throwing an error or using `undefined` instead of a magic string, which would make authentication failures more explicit:

```typescript
export const token = core.getInput('token') || process.env.GITHUB_TOKEN || undefined;
```

__tests__/mocks/jira-api.ts (2)
28-28: Unused parameter `baseUrl` in `setupDefaultMockData`.

The `baseUrl` parameter is only used for constructing the `self` URL in mock issue objects. Consider documenting this or making it optional with a default value.

♻️ Minor improvement

```diff
-export function setupDefaultMockData(baseUrl: string): void {
+export function setupDefaultMockData(baseUrl = 'http://localhost:8080'): void {
```

104-120: Auto-generating issues for unknown keys may hide test bugs.

The `getIssue` mock automatically generates a default response for any issue key not in the mock data. While this adds flexibility, it could mask typos or incorrect issue keys in tests, making failures harder to diagnose.

Consider adding a strict mode or logging when returning auto-generated issues:

♻️ Proposed improvement

```diff
 getIssue: async (issueId: string) => {
   const issue = mockJiraData.issues.get(issueId);
   if (issue) {
     return issue;
   }

   // Generate a default response for any issue
+  console.warn(`[mock] Auto-generating issue for unknown key: ${issueId}`);
   const projectKey = issueId.split('-')[0];
   return {
```
```yaml
permissions:
  contents: read
  pull-requests: read
  issues: read
  id-token: write
```
🧩 Analysis chain
🌐 Web query:
anthropics/claude-code-action GitHub permissions pull-requests write
💡 Result:
To use anthropics/claude-code-action with the ability to create/update PRs, you need pull-requests: write on the workflow token, and Pull requests: Read & write on the GitHub App.
Workflow (permissions: block)
```yaml
permissions:
  contents: write
  pull-requests: write
  issues: write
  # plus id-token: write if you’re using OIDC (e.g., Bedrock/Vertex)
  id-token: write
```
[1][2]
GitHub App repository permissions
- Pull requests: Read & write (in addition to Contents and Issues read/write) [2]
Sources: [1] shows the recommended workflow permissions: including pull-requests: write; [2] states the GitHub App must request Pull requests: Read & write.
Change pull-requests: read to pull-requests: write to allow the Claude Code Review action to post comments on PRs.
The anthropics/claude-code-action requires pull-requests: write permission to create and update review comments. Additionally, consider updating contents: write and issues: write to match the action's recommended permissions.
Recommended permissions:
```yaml
permissions:
  contents: write
  pull-requests: write
  issues: write
  id-token: write
```

🤖 Prompt for AI Agents
In @.github/workflows/claude-code-review.yml around lines 22 - 26, Update the
GitHub Actions permissions block so the Claude Code Review action can post PR
comments: change the pull-requests permission from read to write (modify the
pull-requests key) and consider updating contents and issues to write as
recommended (update contents and issues keys), keeping id-token: write as-is;
ensure the permissions section contains pull-requests: write, contents: write,
issues: write, and id-token: write.
```yaml
permissions:
  contents: read
  pull-requests: read
  issues: read
  id-token: write
  actions: read # Required for Claude to read CI results on PRs
```
Likely needs write permissions for pull-requests and issues.
If Claude responds to @claude mentions by posting comments, it needs write access. The current read permissions would prevent Claude from replying. This is the same concern as in claude-code-review.yml.
🔧 Suggested permission update
```diff
 permissions:
   contents: read
-  pull-requests: read
-  issues: read
+  pull-requests: write
+  issues: write
   id-token: write
   actions: read # Required for Claude to read CI results on PRs
```

🤖 Prompt for AI Agents
In @.github/workflows/claude.yml around lines 21 - 26, Update the workflow
permissions so Claude can post comments: change the permissions block keys
"pull-requests" and "issues" from read to write (keep "contents: read",
"id-token: write", and "actions: read" as-is) so the "permissions" section
grants the workflow the ability to create/update PR and issue comments.
```diff
     steps:
       - name: Release GitHub Actions
-        uses: technote-space/release-github-actions@v7.0.5
+        uses: technote-space/release-github-actions@v8.0.3
```
🧩 Analysis chain
🌐 Web query:
technote-space release-github-actions v8 changelog breaking changes
💡 Result:
I couldn’t reliably find a published v8 changelog / “breaking changes” section for technote-space/release-github-actions in the sources I can access right now.
What I can confirm from package metadata is:
- The npm package `@technote-space/release-github-actions` exists and the latest v8 release shown there is 8.0.3 (published "3 years ago"). [1]
- The package points to the GitHub repo `github.com/technote-space/release-github-actions` as its homepage/repository. [1]
- Socket’s package page lists the same package and versions (including the v8 line), but it doesn’t expose a clear v8 changelog/breaking-changes entry in the rendered content I could retrieve. [2]
If you paste the GitHub Releases page (or the text of the v8.0.0 release notes / CHANGELOG entry), I can extract the exact breaking changes and summarize them.
Sources: [1] [2]
🌐 Web query:
site:github.com technote-space/release-github-actions releases v8.0.0
💡 Result:
In technote-space/release-github-actions, v8.0.0 corresponds to the merged PR “release: v8.0.0” (#437), merged into main on Oct 14, 2022. The PR notes include a BREAKING CHANGE (“change action default value”) along with dependency updates and related changes. [1][2]
🌐 Web query:
technote-space/release-github-actions v8.0.0 "default value" breaking change migration
💡 Result:
In technote-space/release-github-actions v8.0.0, the breaking change labeled “change action default value” is that the default CLEAN_TARGETS value was expanded (so the action deletes more build artifacts by default). [1]
What changed (default value)
CLEAN_TARGETS default changed from:

`.[!.]*,__tests__,docs,src,*.js,*.ts,*.json,*.lock,*.yml,*.yaml`

to:

`.[!.]*,__tests__,docs,src,*.[jt]s,*.[mc][jt]s,*.json,*.lock,*.yml,*.yaml` [2]
That new glob set includes .mjs/.cjs/.mts/.cts (and similar), so those files may now be cleaned by default. [2]
Migration
If you relied on the old behavior, explicitly set CLEAN_TARGETS in your workflow to the previous default (or to your own allowlist), e.g.:
```yaml
- uses: technote-space/release-github-actions@v8
  with:
    CLEAN_TARGETS: '.[!.]*,__tests__,docs,src,*.js,*.ts,*.json,*.lock,*.yml,*.yaml'
```
[2]
**Sources:** [1] https://github.com/technote-space/release-github-actions/pull/443 [2] https://github.com/technote-space/release-github-actions/pull/443/files
---
🏁 Script executed:
```shell
cat -n .github/workflows/publish_action.yml
```

Repository: bitflight-devops/github-action-jira-issues-fixversion-manager

Length of output: 444
Explicitly configure CLEAN_TARGETS or verify the expanded default cleanup behavior is acceptable for this release workflow.
The upgrade from v7 to v8 includes a breaking change: the CLEAN_TARGETS default value was expanded to include .mjs, .cjs, .mts, and .cts files. The workflow does not override this setting, so it now uses the new expanded defaults. For a release workflow, verify this broader cleanup behavior doesn't remove artifacts needed during the release process, or explicitly set CLEAN_TARGETS to the previous default if necessary.
🤖 Prompt for AI Agents
In @.github/workflows/publish_action.yml at line 14, The release action now uses
technote-space/release-github-actions@v8.0.3 which changed the CLEAN_TARGETS
default to also remove .mjs/.cjs/.mts/.cts; explicitly set the CLEAN_TARGETS
input for the action (or confirm the expanded list is acceptable) to avoid
inadvertently deleting release artifacts — edit the step that references uses:
technote-space/release-github-actions@v8.0.3 and add a CLEAN_TARGETS input with
the previous/default pattern you want (or a whitelist of safe extensions) so
cleanup behavior is deterministic.
```javascript
// Try to get logs
try {
  const { data: logs } = await github.rest.actions.downloadJobLogsForWorkflowRun({
    owner: context.repo.owner,
    repo: context.repo.repo,
    job_id: job.id
  });

  // Extract last 30 lines
  const logLines = logs.split('\n');
  const last30Lines = logLines.slice(-30).join('\n');
  jobInfo.logs = last30Lines;
} catch (logError) {
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's examine the file to understand the context
cat -n ".github/workflows/respond-to-ci-failures.yml" | sed -n '120,155p'
```

Repository: bitflight-devops/github-action-jira-issues-fixversion-manager
Length of output: 1904
🌐 Web query:
GitHub Actions API downloadJobLogsForWorkflowRun return type documentation
💡 Result:
downloadJobLogsForWorkflowRun (Octokit) maps to the GitHub REST endpoint:
- `GET /repos/{owner}/{repo}/actions/jobs/{job_id}/logs`
- Return/response: `302 Found` (redirect), not JSON. The response body is effectively empty; the download URL is in the `Location` response header and expires after ~1 minute. The URL points to a plain-text log file. [1][2]
So in Octokit/TypeScript terms, expect a normal Octokit response object where the useful “return value” is response.headers.location (and response.status === 302), not response.data. [2]
Sources: [1] GitHub REST “Download job logs for a workflow run”, [2] Octokit REST docs for downloadJobLogsForWorkflowRun, [3] StackOverflow confirmation of 302 + Location behavior.
Handle API redirect for job logs correctly.
downloadJobLogsForWorkflowRun returns a 302 redirect with the download URL in the Location response header, not JSON data. The current code tries to split an undefined or empty data property and will fail. Either follow the redirect by fetching the URL from response.headers.location, or skip embedding logs and link to the job logs instead.
🤖 Prompt for AI Agents
In @.github/workflows/respond-to-ci-failures.yml around lines 133 - 145, The
code incorrectly assumes github.rest.actions.downloadJobLogsForWorkflowRun
returns JSON in `data`; instead capture the full response and handle the 302
redirect by reading the `response.headers.location` URL, then fetch that URL
(using octokit.request or fetch) to download the log archive and extract or read
its content before splitting into lines to populate `jobInfo.logs`;
alternatively, if you don't want to follow redirects, set `jobInfo.logs` to a
URL string pointing to the job logs using the `response.headers.location` value
(or the standard GitHub Actions job logs URL) and avoid calling `.split` on
undefined — update the try block that calls
`github.rest.actions.downloadJobLogsForWorkflowRun` and the error handler
`logError` accordingly.
```javascript
const prNumber = ${{ steps.find-pr.outputs.pr_number }};
const checkSuiteId = context.payload.check_suite.id;
const failures = JSON.parse('${{ steps.get-failures.outputs.failures }}');
const existingCommentId = '${{ steps.check-comment.outputs.comment_id }}';
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n .github/workflows/respond-to-ci-failures.yml | sed -n '195,210p'
```

Repository: bitflight-devops/github-action-jira-issues-fixversion-manager
Length of output: 833
🌐 Web query:
GitHub Actions fromJSON vs environment variables safe output parsing
💡 Result:
Key differences: fromJSON() vs env vars
- Environment variables are always strings in workflow expression contexts (`env.FOO` is a `string`). If you want an object/array (e.g., for a matrix), you must encode it as a JSON string and then decode it with `fromJSON()`. [5]
- `fromJSON(value)` converts a valid JSON string into a JSON data type (object/array/bool/null/number/string) usable by expressions (including matrices). GitHub’s docs explicitly show passing JSON via a job output and then using `fromJSON()` in the next job. [1]
The “safe output parsing” pattern (recommended)
- Generate JSON in a step (ideally with a real JSON tool like `jq -c` to guarantee valid JSON).
- Write it to `$GITHUB_OUTPUT` (step output), then expose it as a job output.
- Consume it with `fromJSON(needs.job.outputs.foo)`.
This matches GitHub’s intended mechanism for “JSON -> matrix” handoff. [1]
What to avoid (common pitfalls / security issues)
- Don’t build env vars or outputs from untrusted content without controls. Writing untrusted text into environment files can allow newline/delimiter injection that creates extra variables (environment-variable injection). This is a known class of workflow vulnerability. [6][7][2]
- Don’t inline untrusted contexts directly inside `run:` via `${{ }}` (macro expansion can turn data into executable shell text). Prefer putting the value in `env:` and referencing it as a normal shell variable. [3][4]
If your JSON must be multi-line
- GitHub supports multiline values via the environment-file delimiter technique; outputs written through `$GITHUB_OUTPUT` can also be multiline using the same approach. If you use multiline, ensure the delimiter is unique and cannot appear in the content. [5][6]
Practical guidance
- If the goal is expression-time structured data (matrix, conditionals, `strategy`, reusable-workflow inputs): use outputs + `fromJSON()` (not env vars). [1][5]
- If the goal is runtime consumption inside a script: env vars are fine, but treat them as untrusted strings and avoid writing them back into `$GITHUB_ENV`/`$GITHUB_OUTPUT` without validation/escaping (especially stripping/controlling newlines). [2][6][7]
Sources: GitHub Docs on fromJSON usage and JSON matrix pattern [1], workflow command/output file syntax [5], env context typing [5], GitHub guidance on script-injection risks and safer env: usage [3][4], and security references on environment-file injection risk and mitigations [2][6][7].
🌐 Web query:
github-script output parsing special characters newlines injection security
💡 Result:
- Treat any value coming from the event payload as untrusted (issue/PR titles & bodies, branch names, etc.). If you interpolate them into `run:` scripts (Bash/PowerShell), you can get command/script injection on the runner. Prefer passing untrusted strings via environment variables and handling them safely inside the script. [1], [8]
- Outputs and "workflow command" parsing are sensitive to special characters/newlines. Historically `::set-output`/workflow-command style parsing has had injection footguns; modern Actions uses the `$GITHUB_OUTPUT` file for step outputs. For multiline values, use the delimiter ("heredoc") form, but only if you can guarantee the delimiter won’t appear in the value. If the value is arbitrary/untrusted, write it to a file instead of an output/env var. [5]
- If you must move arbitrary text (including newlines) through outputs, a robust pattern is: serialize to JSON (so newlines/special chars become escapes) and later `fromJSON` when consuming in expressions, rather than trying to "escape" newlines manually. [3]
- Real-world impact is runner takeover and secret exfiltration when untrusted strings reach shells/commands unsafely (multiple public GHSL advisories show trivial payloads like `"; echo ...;#"` working). [2], [7]
Sources: GitHub Docs on script injections and workflow commands/multiline handling [1], [5]; GitHub blog guidance [8]; GitHub Security Lab advisories showing practical command injection in workflows [2], [7]; reliable multiline output discussion via JSON [3].
Avoid parsing outputs via quoted string interpolation in JSON.parse().
Outputs can contain newlines and quotes that will break the string literal and cause JSON parsing errors. Use an environment variable to safely pass the output to the script.
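A standalone demo of the failure mode (hypothetical payload; `eval` stands in for the workflow's string splicing):

```javascript
// A job name containing a single quote terminates the spliced string literal.
const failuresOutput = JSON.stringify([{ job: "lint ('strict')" }]);

let spliceFailed = false;
try {
  // Simulates: JSON.parse('${{ steps.get-failures.outputs.failures }}')
  eval(`JSON.parse('${failuresOutput}')`);
} catch {
  spliceFailed = true; // SyntaxError: the embedded quote broke the literal
}
console.log(spliceFailed); // true

// The env-var route hands the raw string to the script untouched.
process.env.FAILURES = failuresOutput;
const failures = JSON.parse(process.env.FAILURES);
console.log(failures[0].job);
```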
Safer parsing
```diff
 - name: Post or update failure comment
   if: steps.verify-author.outputs.is_copilot == 'true'
   uses: actions/github-script@v8
   with:
     script: |
       const prNumber = ${{ steps.find-pr.outputs.pr_number }};
       const checkSuiteId = context.payload.check_suite.id;
-      const failures = JSON.parse('${{ steps.get-failures.outputs.failures }}');
+      const failures = JSON.parse(process.env.FAILURES || '[]');
       const existingCommentId = '${{ steps.check-comment.outputs.comment_id }}';
+  env:
+    FAILURES: ${{ steps.get-failures.outputs.failures }}
```

🤖 Prompt for AI Agents
In @.github/workflows/respond-to-ci-failures.yml around lines 201 - 205, The
step currently embeds outputs directly into a quoted string for JSON.parse (e.g.
JSON.parse('${{ steps.get-failures.outputs.failures }}')), which breaks when
outputs contain newlines/quotes; instead export the outputs as environment
variables in the workflow step (e.g. env: FAILURES: ${{
steps.get-failures.outputs.failures }} and optionally EXISTING_COMMENT_ID: ${{
steps.check-comment.outputs.comment_id }}), then in the script read and parse
them from process.env (e.g. JSON.parse(process.env.FAILURES) and use
process.env.EXISTING_COMMENT_ID) or use shell-safe parsing (echo "$FAILURES" |
jq), replacing occurrences of failures and existingCommentId in the code
accordingly.
```typescript
function removeVolume(volumeName: string): boolean {
  try {
    execSync(`docker volume rm ${volumeName}`, { stdio: 'pipe' });
    return true;
  } catch {
    return false;
  }
}

/**
 * Creates a new Docker volume
 * @param volumeName - Name of the Docker volume to create
 * @returns True if creation succeeded, false otherwise
 */
function createVolume(volumeName: string): boolean {
  try {
    execSync(`docker volume create ${volumeName}`, { stdio: 'pipe' });
    return true;
```
Avoid shell interpolation for docker commands (injection + path safety).
execSync with interpolated strings (volume names, snapshot paths) can be abused via crafted metadata or break on spaces. Use spawnSync/execFileSync with argv to avoid shell parsing.
🔒 Suggested fix
```diff
 function removeVolume(volumeName: string): boolean {
   try {
-    execSync(`docker volume rm ${volumeName}`, { stdio: 'pipe' });
-    return true;
+    const result = spawnSync('docker', ['volume', 'rm', volumeName], { stdio: 'pipe' });
+    return result.status === 0;
   } catch {
     return false;
   }
 }

 function createVolume(volumeName: string): boolean {
   try {
-    execSync(`docker volume create ${volumeName}`, { stdio: 'pipe' });
-    return true;
+    const result = spawnSync('docker', ['volume', 'create', volumeName], { stdio: 'pipe' });
+    return result.status === 0;
   } catch {
     return false;
   }
 }

 function restoreVolume(volumeName: string, snapshotPath: string): boolean {
   console.log(`Restoring volume ${volumeName} from ${path.basename(snapshotPath)}...`);
   try {
-    const cmd = [
-      'docker',
-      'run',
-      '--rm',
-      '-v',
-      `${volumeName}:/data`,
-      '-v',
-      `${path.dirname(snapshotPath)}:/backup:ro`,
-      'alpine',
-      'sh',
-      '-c',
-      `"rm -rf /data/* /data/..?* /data/.[!.]* 2>/dev/null; tar xzf /backup/${path.basename(snapshotPath)} -C /data"`,
-    ];
-
-    execSync(cmd.join(' '), { stdio: 'inherit' });
+    const cmd = [
+      'run',
+      '--rm',
+      '-v',
+      `${volumeName}:/data`,
+      '-v',
+      `${path.dirname(snapshotPath)}:/backup:ro`,
+      'alpine',
+      'sh',
+      '-c',
+      `rm -rf /data/* /data/..?* /data/.[!.]* 2>/dev/null; tar xzf "/backup/${path.basename(snapshotPath)}" -C /data`,
+    ];
+
+    const result = spawnSync('docker', cmd, { stdio: 'inherit' });
+    if (result.status !== 0) {
+      throw new Error(`docker run failed with status ${result.status}`);
+    }
```
+ }Also applies to: 187-206
🤖 Prompt for AI Agents
In `@e2e/scripts/snapshot-restore.ts` around lines 111 - 128, The functions
removeVolume and createVolume (and the later code that runs docker commands
around snapshot paths) currently use execSync with interpolated shell strings
which is unsafe for injection and spaces; replace execSync usage with
child_process.spawnSync or execFileSync and pass docker and its arguments as an
argv array (e.g., ['volume','rm', volumeName] or ['run', ... , snapshotPath]) so
arguments are not shell-parsed, propagate/return success based on the spawned
process exitCode and include stderr in logs as needed, and ensure any snapshot
path usage also uses argv to avoid shell interpolation.
```typescript
function saveVolume(volumeName: string, outputPath: string): boolean {
  console.log(`Saving volume ${volumeName} to ${path.basename(outputPath)}...`);

  try {
    // Use alpine container to tar the volume contents
    const cmd = [
      'docker',
      'run',
      '--rm',
      '-v',
      `${volumeName}:/data:ro`,
      '-v',
      `${path.dirname(outputPath)}:/backup`,
      'alpine',
      'tar',
      'czf',
      `/backup/${path.basename(outputPath)}`,
      '-C',
      '/data',
      '.',
    ];

    execSync(cmd.join(' '), { stdio: 'inherit' });

    // Verify the file was created
    if (fs.existsSync(outputPath)) {
      const stats = fs.statSync(outputPath);
      const sizeMB = (stats.size / (1024 * 1024)).toFixed(2);
      console.log(` Saved: ${sizeMB} MB`);
      return true;
    }

    return false;
  } catch (error) {
    console.error(` Failed to save volume: ${error}`);
    return false;
  }
}
```
Potential command injection via shell string interpolation.
Using execSync(cmd.join(' '), ...) with paths that may contain special characters (spaces, quotes, etc.) could lead to unexpected behavior or command injection. While volume names are typically controlled, the outputPath comes from user input via --output-dir.
🔒 Safer approach using spawnSync
```diff
-    execSync(cmd.join(' '), { stdio: 'inherit' });
+    const result = spawnSync('docker', [
+      'run',
+      '--rm',
+      '-v', `${volumeName}:/data:ro`,
+      '-v', `${path.dirname(outputPath)}:/backup`,
+      'alpine',
+      'tar', 'czf', `/backup/${path.basename(outputPath)}`, '-C', '/data', '.',
+    ], { stdio: 'inherit' });
+
+    if (result.status !== 0) {
+      throw new Error(`Docker tar command failed with status ${result.status}`);
+    }
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
function saveVolume(volumeName: string, outputPath: string): boolean {
  console.log(`Saving volume ${volumeName} to ${path.basename(outputPath)}...`);

  try {
    // Use alpine container to tar the volume contents, passing argv directly
    const result = spawnSync('docker', [
      'run',
      '--rm',
      '-v', `${volumeName}:/data:ro`,
      '-v', `${path.dirname(outputPath)}:/backup`,
      'alpine',
      'tar', 'czf', `/backup/${path.basename(outputPath)}`, '-C', '/data', '.',
    ], { stdio: 'inherit' });

    if (result.status !== 0) {
      throw new Error(`Docker tar command failed with status ${result.status}`);
    }

    // Verify the file was created
    if (fs.existsSync(outputPath)) {
      const stats = fs.statSync(outputPath);
      const sizeMB = (stats.size / (1024 * 1024)).toFixed(2);
      console.log(` Saved: ${sizeMB} MB`);
      return true;
    }

    return false;
  } catch (error) {
    console.error(` Failed to save volume: ${error}`);
    return false;
  }
}
```
🤖 Prompt for AI Agents
In `@e2e/scripts/snapshot-save.ts` around lines 111 - 148, The saveVolume function
builds a shell command string and calls execSync(cmd.join(' ')) which is
vulnerable to injection via outputPath or volumeName; change this to use a safe
child process API (e.g., spawnSync or execFileSync) that accepts an argv array
instead of a single shell string (keep the argv elements that are currently in
cmd but pass them as separate args), ensure you do not pass { shell: true }, and
construct the volume and backup paths with path.basename/path.dirname only (or
validate/sanitize outputPath) before using them; update error handling in
saveVolume to capture spawnSync/execFileSync stderr/stdout for logging and still
return false on failure.
```typescript
if (require.main === module) {
  waitForJira().catch((error) => {
    console.error('Failed to wait for Jira:', error);
    process.exit(1);
  });
}
```
require.main === module is incompatible with ESM modules.
The tsconfig.json shows "module": "ESNext", which produces ES modules. The require.main === module pattern only works with CommonJS. For ESM, use import.meta.url comparison or check for a CLI entry differently.
🔧 Proposed fix for ESM compatibility
-// Run if called directly
-if (require.main === module) {
- waitForJira().catch((error) => {
- console.error('Failed to wait for Jira:', error);
- process.exit(1);
- });
-}
+// Run if called directly
+import { fileURLToPath } from 'node:url';
+
+const isMainModule = import.meta.url === `file://${process.argv[1]}` ||
+ fileURLToPath(import.meta.url) === process.argv[1];
+
+if (isMainModule) {
+ waitForJira().catch((error) => {
+ console.error('Failed to wait for Jira:', error);
+ process.exit(1);
+ });
+}

Alternatively, add the import at the top of the file.
🤖 Prompt for AI Agents
In `@e2e/scripts/wait-for-jira.ts` around lines 106 - 111, The ESM-incompatible
check using "require.main === module" should be replaced with an
import.meta.url-based entry check: add "fileURLToPath" import from "url" at the
top, then replace the require.main block with a runtime comparison using
fileURLToPath(import.meta.url) === process.argv[1] and call
waitForJira().catch(...) inside that branch; reference symbols: waitForJira,
require.main === module (remove), import.meta.url, fileURLToPath, and
process.argv.
  // Initialize config immediately so timeout values are available for test definitions
  const config = getE2EConfig();
  let client: JiraE2EClient;
  const projectKey = 'E2E';
Avoid hard-coding the project key
Several tests use the constant 'E2E', which ignores config.test.projectKey. If config overrides are used, tests will target the wrong project.
🐛 Proposed fix
- const projectKey = 'E2E';
+ const projectKey = config.test.projectKey;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  // Initialize config immediately so timeout values are available for test definitions
  const config = getE2EConfig();
  let client: JiraE2EClient;
- const projectKey = 'E2E';
+ const projectKey = config.test.projectKey;
🤖 Prompt for AI Agents
In `@e2e/tests/fixversion.e2e.test.ts` around lines 9 - 13, The tests hard-code
projectKey as 'E2E' instead of using the test configuration; change the
declaration of the projectKey constant to read from the initialized config (from
getE2EConfig()) so tests use config.test.projectKey at runtime; update any
references to the projectKey variable in this file
(e2e/tests/fixversion.e2e.test.ts) and keep the initialization order (config
must be set before projectKey) to ensure timeout and project overrides are
respected when using JiraE2EClient.
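The fix the reviewer asks for is simply to derive the key from configuration. A minimal sketch, assuming a hypothetical `E2EConfig` shape and an optional `E2E_PROJECT_KEY` environment override (neither is necessarily the repo's real API):

```typescript
// Hypothetical config shape; the repo's real getE2EConfig() may differ.
interface E2EConfig {
  test: { projectKey: string };
}

// Resolve the project key from config (with an optional env override) instead
// of hard-coding 'E2E', so config overrides actually take effect in tests.
function resolveProjectKey(
  config: E2EConfig,
  env: Record<string, string | undefined> = {},
): string {
  return env.E2E_PROJECT_KEY ?? config.test.projectKey;
}
```

With this shape, a test file declares `const projectKey = resolveProjectKey(config)` once, after `getE2EConfig()` has run, and every assertion targets whichever project the configuration names.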
src/EventManager.ts
Outdated
  async updateJiraFixVersion(): Promise<undefined[]> {
    const issues = this.getIssueSetFromString(this.argv.issues);
-   const applyIssueList: Promise<void>[] = [];
+   const applyIssueList: Promise<undefined>[] = [];
    for (const issueKey of issues) {
-     applyIssueList.push(new Issue(issueKey, this.jira, this.argv).build().then(async (issueObj) => issueObj.apply()));
+     applyIssueList.push(
+       new Issue(issueKey, this.jira, this.argv).build().then(async (issueObj) => {
+         await issueObj.apply();
+       }),
+     );
    }
    return Promise.all(applyIssueList);
  }
Fix type mismatch causing pipeline failure.
The pipeline fails because Issue.apply() returns Promise<void>, but the .then() callback implicitly returns undefined when no explicit return is present. This creates Promise<void | undefined> which doesn't match Promise<undefined>.
The idiomatic fix is to use Promise<void[]> as the return type and simplify the callback:
🐛 Proposed fix
- async updateJiraFixVersion(): Promise<undefined[]> {
+ async updateJiraFixVersion(): Promise<void[]> {
const issues = this.getIssueSetFromString(this.argv.issues);
- const applyIssueList: Promise<undefined>[] = [];
+ const applyIssueList: Promise<void>[] = [];
for (const issueKey of issues) {
applyIssueList.push(
- new Issue(issueKey, this.jira, this.argv).build().then(async (issueObj) => {
- await issueObj.apply();
- }),
+ new Issue(issueKey, this.jira, this.argv).build().then((issueObj) => issueObj.apply()),
);
}
return Promise.all(applyIssueList);
  }

This simplification:

- Uses the more semantic `void[]` return type
- Removes the unnecessary `async` wrapper in the `.then()` callback
- Directly returns the promise from `apply()` instead of awaiting and discarding it
🧰 Tools
🪛 GitHub Actions: CI
[error] 274-274: TS2345: Argument of type 'Promise<void | undefined>' is not assignable to parameter of type 'Promise<undefined>'.
🪛 GitHub Check: Validate
[failure] 274-274:
Argument of type 'Promise<void | undefined>' is not assignable to parameter of type 'Promise<undefined>'.
🤖 Prompt for AI Agents
In `@src/EventManager.ts` around lines 269 - 280, The function
updateJiraFixVersion has a type mismatch: it currently declares
Promise<undefined[]> and builds applyIssueList with an async .then() that yields
undefined; change the return type to Promise<void[]> and the collection type to
Promise<void>[], and inside the loop push the promise returned by
issueObj.apply() directly (remove the async wrapper and await in the .then()
callback) so Issue.apply() (Promise<void>) is returned and
Promise.all(applyIssueList) resolves to void[]; reference updateJiraFixVersion,
Issue.build()/apply(), and getIssueSetFromString to locate the code.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/EventManager.ts (1)
172-174: Minor typo in debug message.

The debug message contains a grammatical error: "because there its part of" should be "because it's part of".
📝 Proposed fix
- core.debug(`${issueKey} is included because there its part of the specific project filter`);
+ core.debug(`${issueKey} is included because it's part of the specific project filter`);
♻️ Duplicate comments (1)
src/EventManager.ts (1)
273-277: Simplify the `.then()` callback by removing unnecessary `async`/`await`.

The `async` wrapper and explicit `await` are unnecessary here. Since `issueObj.apply()` returns a `Promise<void>`, returning it directly from `.then()` will produce the same result more concisely.

♻️ Proposed simplification

  applyIssueList.push(
-   new Issue(issueKey, this.jira, this.argv).build().then(async (issueObj) => {
-     await issueObj.apply();
-   }),
+   new Issue(issueKey, this.jira, this.argv).build().then((issueObj) => issueObj.apply()),
  );
🧹 Nitpick comments (1)
src/EventManager.ts (1)
17-17: Consider logging a warning when falling back to 'NO_TOKEN'.

The silent fallback to 'NO_TOKEN' may mask configuration errors, leading to confusing downstream authentication failures. Consider adding a `core.warning()` when the fallback is used, so users are aware that no valid token was provided.

♻️ Suggested improvement

-export const token = core.getInput('token') || process.env.GITHUB_TOKEN || 'NO_TOKEN';
+const inputToken = core.getInput('token') || process.env.GITHUB_TOKEN;
+export const token = inputToken || 'NO_TOKEN';
+if (!inputToken) {
+  core.warning('No GitHub token provided via input or GITHUB_TOKEN env var; using fallback.');
+}
- Remove redundant !**/e2e/**/*.js pattern from biome.json
- Use more readable condition in utils.ts (!str || !Array.isArray)
- Note: telemetry option removed in jira.js v5, not applicable
Summary
Complete modernization of the repository to match the patterns established in
github-action-jira-transition-manager.
Changes
Test Plan
Summary by CodeRabbit
New Features
Testing & Infrastructure
Developer Tooling
Documentation