2 changes: 2 additions & 0 deletions README.md
@@ -75,6 +75,8 @@ _Actions for managing releases._

#### - [Create](actions/release/create/README.md)

#### - [Summarize changelog](actions/release/summarize-changelog/README.md)

### Workflow

_Actions for managing workflows._
62 changes: 62 additions & 0 deletions actions/release/summarize-changelog/README.md
@@ -0,0 +1,62 @@
# GitHub Action: Release - Summarize Changelog

## Overview

Compile a release changelog from all commits between two refs.

Features:

- Groups commits by conventional commit type (optional).
- Renders output through a configurable Markdown template.
- Accepts a summary pre-generated by any LLM provider (`llm-summary` input).
- Generates the summary via LangChain with the `openai`, `anthropic`, or `google-genai` providers.
- Generates the summary from the produced `llm-prompt` through a custom command (`llm-summary-command` input).
- Exposes a provider-agnostic `llm-prompt` output for integration with any LLM action.
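
The conventional-commit grouping recognizes subjects such as `feat(scope)!: description`. A minimal sketch of that parsing (the regex mirrors `summarize.js`; the `parseSubject` helper name is illustrative, not part of the action):

```javascript
// Mirrors the subject-parsing regex used by summarize.js.
// `parseSubject` is an illustrative helper, not part of the action.
const SUBJECT_RE = /^(?<type>[a-z]+)(?:\([^)]+\))?(?:!)?:\s+(?<description>.+)$/i;

function parseSubject(subject) {
  const match = subject.match(SUBJECT_RE);
  return {
    // Conventional commit type (e.g. "feat"), or null for free-form subjects.
    type: match?.groups?.type?.toLowerCase() ?? null,
    // Description after the type prefix; free-form subjects pass through as-is.
    description: match?.groups?.description ?? subject,
  };
}
```

Subjects without a recognized prefix fall into the "Other changes" section.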

## Usage

```yaml
- id: changelog
  uses: hoverkraft-tech/ci-github-publish/actions/release/summarize-changelog@main
  with:
    base-ref: v1.2.0
    head-ref: HEAD
    conventional-commits: "true"
    llm-summary: ""
    llm-provider: "openai"
    llm-model: "gpt-4o-mini"
    llm-api-key: ${{ secrets.OPENAI_API_KEY }}
    llm-base-url: "https://api.openai.com/v1"
    llm-summary-command: ""
    markdown-template: |
      ## Release notes
      Range: `{{base_ref}}..{{head_ref}}`

      {{summary}}

      {{changes}}
```
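
The `llm-summary-command` path assumes nothing beyond a command that reads the `llm-prompt` text on stdin and writes a Markdown summary to stdout. A minimal sketch of that contract as a Node function (the `summarizeFromPrompt` name and its counting heuristic are illustrative, not part of the action):

```javascript
// Illustrative stand-in for an llm-summary-command target: takes the
// llm-prompt text and returns a short Markdown summary. A real command
// would typically forward the prompt to an LLM instead.
function summarizeFromPrompt(prompt) {
  // Change entries in the prompt are rendered as "- ..." bullet lines.
  const changeLines = prompt
    .split("\n")
    .filter((line) => line.startsWith("- "));
  return `This release contains ${changeLines.length} listed change(s).`;
}
```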

## Inputs

| Input | Description | Required | Default |
| ---------------------- | ---------------------------------------------------------------------------------- | -------- | --------------------------- |
| `base-ref` | Base Git ref (excluded from the range). | true | - |
| `head-ref` | Head Git ref (included in the range). | true | - |
| `conventional-commits` | Group commit messages by conventional commit type. | false | `true` |
| `llm-summary` | Optional summary generated by any LLM provider. | false | `""` |
| `llm-provider` | LLM provider used with LangChain (`openai`, `anthropic`, `google-genai`). | false | `openai` |
| `llm-model` | Optional model used to generate the summary from `llm-prompt`. | false | `""` |
| `llm-api-key` | Optional API key for the selected LLM provider. | false | `""` |
| `llm-base-url` | Optional base URL (used for `openai` provider). | false | `https://api.openai.com/v1` |
| `llm-summary-command` | Optional command that reads `llm-prompt` on stdin and returns a summary on stdout. | false | `""` |
| `markdown-template` | Markdown template with placeholders (`base_ref`, `head_ref`, `commit_count`, `summary`, `changes`). | false | Built-in template |
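
The template placeholders are substituted with plain string replacement. A simplified sketch of that rendering (the `renderTemplate` helper is illustrative; the action performs equivalent per-placeholder replacements):

```javascript
// Simplified re-implementation of the placeholder substitution:
// each {{name}} is replaced when a value is provided, otherwise
// left untouched.
function renderTemplate(template, values) {
  return template
    .replace(/\{\{(\w+)\}\}/g, (match, key) =>
      key in values ? String(values[key]) : match,
    )
    .trim();
}
```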

## Outputs

| Output | Description |
| -------------- | --------------------------------------------------- |
| `changelog` | Rendered Markdown changelog. |
| `changes` | Compiled Markdown list of changes. |
| `llm-prompt` | Prompt text that can be sent to any LLM provider. |
| `commit-count` | Number of commits included in the changelog output. |
121 changes: 121 additions & 0 deletions actions/release/summarize-changelog/action.yml
@@ -0,0 +1,121 @@
name: "Release - Summarize changelog"
description: "Compile release changelog entries between two refs, with optional conventional commit grouping and LLM summary injection."
author: hoverkraft
branding:
  icon: file-text
  color: blue

inputs:
  base-ref:
    description: "Base git ref (excluded from the range)."
    required: true
  head-ref:
    description: "Head git ref (included in the range)."
    required: true
  conventional-commits:
    description: "Whether to group commit messages by conventional commit type."
    required: false
    default: "true"
  llm-summary:
    description: "Optional summary generated by any LLM provider."
    required: false
    default: ""
  llm-model:
    description: "Optional model used to generate the summary from `llm-prompt`."
    required: false
    default: ""
  llm-provider:
    description: "LLM provider used with LangChain (`openai`, `anthropic`, `google-genai`)."
    required: false
    default: "openai"
  llm-api-key:
    description: "Optional API key for the selected LLM provider."
    required: false
    default: ""
  llm-base-url:
    description: "Optional base URL (used for the `openai` provider)."
    required: false
    default: "https://api.openai.com/v1"
  llm-summary-command:
    description: "Optional command used to generate the summary from `llm-prompt` (the prompt is sent through stdin)."
    required: false
    default: ""
  markdown-template:
    description: |
      Markdown template used to build the final changelog.
      Supported placeholders:
      - {{base_ref}}
      - {{head_ref}}
      - {{commit_count}}
      - {{summary}}
      - {{changes}}
    required: false
    default: |
      ## Changelog

      _Changes from `{{base_ref}}` to `{{head_ref}}`._

      {{summary}}

      {{changes}}

outputs:
  changelog:
    description: "Rendered markdown changelog."
    value: ${{ steps.summarize.outputs.changelog }}
  changes:
    description: "Compiled markdown list of changes."
    value: ${{ steps.summarize.outputs.changes }}
  llm-prompt:
    description: "Prompt text that can be sent to any LLM provider."
    value: ${{ steps.summarize.outputs.llm-prompt }}
  commit-count:
    description: "Number of commits included in the changelog."
    value: ${{ steps.summarize.outputs.commit-count }}

runs:
  using: "composite"
  steps:
    - uses: hoverkraft-tech/ci-github-common/actions/checkout@f24ce3360a8abf9bf386a62ab13d0ae5de5f9d13 # 0.31.7
      with:
        fetch-depth: "0"

    - shell: bash
      if: ${{ !inputs.llm-summary && inputs.llm-model != '' }}
      env:
        LLM_PROVIDER: ${{ inputs.llm-provider }}
      run: |
        provider="${LLM_PROVIDER:-openai}"
        case "$provider" in
          openai)
            npm install --no-save langchain@1.2.25 @langchain/openai@1.2.8
            ;;
          anthropic)
            npm install --no-save langchain@1.2.25 @langchain/anthropic@1.3.18
            ;;
          google-genai)
            npm install --no-save langchain@1.2.25 @langchain/google-genai@2.1.19
            ;;
          *)
            echo "Unsupported llm-provider: $provider. Supported values: openai, anthropic, google-genai"
            exit 1
            ;;
        esac

    - id: summarize
      uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
      env:
        BASE_REF: ${{ inputs.base-ref }}
        HEAD_REF: ${{ inputs.head-ref }}
        CONVENTIONAL_COMMITS: ${{ inputs.conventional-commits }}
        LLM_SUMMARY: ${{ inputs.llm-summary }}
        LLM_MODEL: ${{ inputs.llm-model }}
        LLM_PROVIDER: ${{ inputs.llm-provider }}
        LLM_API_KEY: ${{ inputs.llm-api-key }}
        LLM_BASE_URL: ${{ inputs.llm-base-url }}
        LLM_SUMMARY_COMMAND: ${{ inputs.llm-summary-command }}
        MARKDOWN_TEMPLATE: ${{ inputs.markdown-template }}
      with:
        script: |
          const summarize = require(`${process.env.GITHUB_ACTION_PATH}/summarize.js`);
          await summarize({ core });
145 changes: 145 additions & 0 deletions actions/release/summarize-changelog/summarize.js
@@ -0,0 +1,145 @@
const { execSync } = require("node:child_process");
const { initChatModel } = require("langchain/chat_models/universal");

module.exports = async ({ core }) => {
  const baseRef = (process.env.BASE_REF || "").trim();
  const headRef = (process.env.HEAD_REF || "").trim();
  const useConventionalCommits =
    (process.env.CONVENTIONAL_COMMITS || "").trim() === "true";
  const llmSummaryInput = (process.env.LLM_SUMMARY || "").trim();
  const llmModel = (process.env.LLM_MODEL || "").trim();
  const llmProvider = (process.env.LLM_PROVIDER || "").trim() || "openai";
  const llmApiKey = (process.env.LLM_API_KEY || "").trim();
  const llmBaseUrl = (process.env.LLM_BASE_URL || "").trim();
  const llmSummaryCommand = (process.env.LLM_SUMMARY_COMMAND || "").trim();
  const markdownTemplate = process.env.MARKDOWN_TEMPLATE || "";

  if (!baseRef || !headRef) {
    core.setFailed("Both base-ref and head-ref inputs are required.");
    return;
  }

  // List commit subjects in the base..head range (base excluded, head included).
  const rawLog = execSync(
    `git log --no-merges --pretty=format:%s ${baseRef}..${headRef}`,
    { encoding: "utf8" },
  ).trim();

  const commits = rawLog
    ? rawLog
        .split("\n")
        .map((line) => line.trim())
        .filter(Boolean)
    : [];

  // Group subjects by conventional commit type when enabled.
  const groupedCommits = commits.reduce((acc, subject) => {
    const match = subject.match(
      /^(?<type>[a-z]+)(?:\([^)]+\))?(?:!)?:\s+(?<description>.+)$/i,
    );
    const rawType = match?.groups?.type?.toLowerCase();
    const description = match?.groups?.description || subject;

    let section = "Other changes";
    if (useConventionalCommits && rawType) {
      const typeToTitle = {
        feat: "Features",
        fix: "Bug fixes",
        perf: "Performance",
        refactor: "Refactors",
        docs: "Documentation",
        test: "Tests",
        build: "Build",
        ci: "CI",
        chore: "Chores",
        style: "Style",
        revert: "Reverts",
      };
      section = typeToTitle[rawType] || "Other changes";
    } else if (!useConventionalCommits) {
      section = "Changes";
    }

    if (!acc.has(section)) {
      acc.set(section, []);
    }

    acc.get(section).push(useConventionalCommits ? description : subject);
    return acc;
  }, new Map());

  const changes = [...groupedCommits.entries()]
    .map(
      ([section, entries]) =>
        `### ${section}\n${entries.map((entry) => `- ${entry}`).join("\n")}`,
    )
    .join("\n\n")
    .trim();

  const llmPrompt = [
    "Summarize the following release changes as markdown:",
    `Base ref: ${baseRef}`,
    `Head ref: ${headRef}`,
    "",
    changes || "- No user-facing changes found.",
  ].join("\n");

  let llmSummary = llmSummaryInput;
  if (!llmSummary && llmModel) {
    if (!llmApiKey) {
      core.setFailed("llm-api-key is required when llm-model is provided.");
      return;
    }

    const llmConfig = {
      model: llmModel,
      modelProvider: llmProvider,
    };
    if (llmProvider === "openai") {
      llmConfig.apiKey = llmApiKey;
      if (llmBaseUrl) {
        llmConfig.configuration = { baseURL: llmBaseUrl };
      }
    } else if (llmProvider === "anthropic" || llmProvider === "google-genai") {
      llmConfig.apiKey = llmApiKey;
    } else {
      core.setFailed(
        "Unsupported llm-provider. Supported values: openai, anthropic, google-genai.",
      );
      return;
    }

    const llm = await initChatModel(undefined, llmConfig);
    const response = await llm.invoke([
      {
        role: "system",
        content: "You generate concise release summaries in Markdown.",
      },
      { role: "user", content: llmPrompt },
    ]);
    llmSummary =
      typeof response?.content === "string" ? response.content.trim() : "";
  }

  // Fallback: delegate summary generation to a user-provided command.
  if (!llmSummary && llmSummaryCommand) {
    llmSummary = execSync(llmSummaryCommand, {
      encoding: "utf8",
      input: llmPrompt,
    }).trim();
  }

  const summary = llmSummary ? `### Summary\n${llmSummary}` : "";
  const changelog = markdownTemplate
    .replace(/\{\{base_ref\}\}/g, baseRef)
    .replace(/\{\{head_ref\}\}/g, headRef)
    .replace(/\{\{commit_count\}\}/g, `${commits.length}`)
    .replace(/\{\{summary\}\}/g, summary)
    .replace(/\{\{changes\}\}/g, changes)
    .trim();

  core.setOutput("commit-count", `${commits.length}`);
  core.setOutput("changes", changes);
  core.setOutput("llm-prompt", llmPrompt);
  core.setOutput("changelog", changelog);
};