Merged
45 changes: 40 additions & 5 deletions docs/safe-outputs.md
@@ -480,15 +480,26 @@ artifacts instead.

Publishes a workspace file as an Azure DevOps **pipeline artifact** that appears
in the **Artifacts tab** of the build summary page. Uses the ADO build artifacts
REST API in two steps:

1. **Upload bytes** to the agent's own per-build file container (Azure DevOps
creates one container per build and exposes its ID via `BUILD_CONTAINERID`).
2. **Associate** the artifact record (`name = artifact_name`) with the target
build via `POST /{project}/_apis/build/builds/{effective_build_id}/artifacts`.
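The two endpoint shapes can be sketched as below. This is illustrative only: the helper names, the `itemPath` query parameter, and the `api-version` value are assumptions, not the executor's actual request code.

```rust
// Step 1 target: PUT file bytes into the build's pre-created file container.
// `collection_uri` is assumed to look like "https://dev.azure.com/{org}/"
// (trailing slash included). Container resources are organization-scoped.
fn upload_url(collection_uri: &str, container_id: u64, folder: &str, file: &str) -> String {
    format!(
        "{collection_uri}_apis/resources/Containers/{container_id}?itemPath={folder}/{file}&api-version=7.1"
    )
}

// Step 2 target: POST an artifact record that points at the container folder.
fn associate_url(collection_uri: &str, project: &str, build_id: u64) -> String {
    format!("{collection_uri}{project}/_apis/build/builds/{build_id}/artifacts?api-version=7.1")
}
```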

**Omit `build_id` to target the current pipeline run** — the executor resolves
the build ID from the `BUILD_BUILDID` environment variable automatically. When
`build_id` is provided, the artifact record is published to that specific build
("cross-build publishing"). The artifact bytes still live in the agent's own
build container; only the record's pointer is associated with the target build.
This means cross-published artifacts share the agent build's retention — if the
agent's build is purged, the cross-referenced artifact stops being downloadable.
Cross-project publishing is not supported (the associate POST uses the current
pipeline's project).
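A minimal sketch of that resolution rule (the helper name and error strings are illustrative, not taken from the executor):

```rust
// An explicit build_id wins (cross-build publishing); otherwise fall back
// to the BUILD_BUILDID value the pipeline injects for the current run.
fn effective_build_id(param: Option<u64>, env_build_id: Option<u64>) -> Result<u64, String> {
    match param {
        Some(0) => Err("build_id must be positive".into()),
        Some(id) => Ok(id),
        None => env_build_id.ok_or_else(|| "BUILD_BUILDID is not set".into()),
    }
}
```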

The tool stages the file during Stage 1 (MCP) by copying it into the
safe-outputs directory; Stage 3 reads the staged copy and executes the two-step
REST flow.

**Agent parameters:**
- `build_id` *(optional)* - Target build ID. Omit to publish to the current pipeline run. Must be positive when specified.
@@ -504,13 +504,37 @@ safe-outputs:
allowed-artifact-names: [] # Optional — restrict names (suffix `*` = prefix match)
allowed-build-ids: [] # Optional — restrict target builds (skipped when targeting current build)
name-prefix: "" # Optional — prepended to the agent-supplied artifact name
require-unique-names: false # Optional — see "Reusing artifact names" below
max: 3 # Maximum per run (default: 3)
```
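The `allowed-artifact-names` matching rule described above (trailing `*` means prefix match, an empty list allows everything) can be sketched as follows; this is a hypothetical helper, not the executor's real matching code.

```rust
// Empty allow-list permits any name; otherwise each pattern is either an
// exact name or, with a trailing `*`, a prefix match.
fn name_allowed(allowed: &[&str], name: &str) -> bool {
    allowed.is_empty()
        || allowed.iter().any(|pat| match pat.strip_suffix('*') {
            Some(prefix) => name.starts_with(prefix), // trailing `*`: prefix match
            None => *pat == name,                     // otherwise: exact match
        })
}
```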

**Reusing artifact names within one agent run:**
By default, the same `artifact_name` may be reused across multiple
`upload-pipeline-artifact` calls in one run (e.g. publishing a `TriageSummary`
to many failing builds at once). The executor inserts a short hash suffix
(`{artifact_name}__{6 hex}`) into the **internal container folder name** so
the calls don't silently overwrite each other's bytes in the agent's shared
build container. The hash lives only in internal addressing — it does not
appear in the `record.name` your downstream consumers query for, in the web UI
"Download as zip" filename, or in the contents of files extracted by the
`DownloadBuildArtifacts@1` / `DownloadPipelineArtifact@2` tasks (all of which
strip the container folder prefix).
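The internal folder naming can be pictured like this. The hash input here is an assumption: the docs only specify the `{artifact_name}__{6 hex}` shape, not how the six hex characters are derived.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive 6 hex chars from the name plus a per-call counter so repeated
// uses of one artifact_name land in distinct container folders.
fn container_folder(artifact_name: &str, call_index: u64) -> String {
    let mut h = DefaultHasher::new();
    (artifact_name, call_index).hash(&mut h);
    format!("{artifact_name}__{:06x}", h.finish() & 0x00ff_ffff)
}
```

The suffix exists only in this internal path; the artifact record's `name` stays exactly what the agent supplied.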

Set `require-unique-names: true` to use a clean container folder
(`{artifact_name}` only, no suffix) and reject in-run reuse of
`(effective_build_id, artifact_name)` with a clear early error before any HTTP
call. Use this when you guarantee one artifact per name per run and want the
shortest possible internal addressing.
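A sketch of the fail-fast check under `require-unique-names: true`. The executor keeps this set behind an `Arc<Mutex<…>>` shared across the run; the locking is dropped here for brevity, and the function name is illustrative.

```rust
use std::collections::HashSet;

// Reject a repeated (effective_build_id, artifact_name) pair before any
// HTTP call would be issued.
fn check_and_insert(seen: &mut HashSet<String>, build_id: u64, name: &str) -> Result<(), String> {
    let key = format!("{build_id}/{name}");
    if !seen.insert(key) {
        return Err(format!(
            "artifact '{name}' already uploaded to build {build_id} in this run"
        ));
    }
    Ok(())
}
```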

Two records with the same `name` on the **same** target build still collide at
the record level (ADO returns 409 from the associate call) regardless of this
setting; use distinct `artifact_name` values when targeting one build with
multiple uploads.

**Notes:**
- Single-file only; directory uploads are not supported.
- When `build_id` is omitted and `allowed-build-ids` is configured, the allow-list check is skipped — the current build is implicitly trusted.
- Requires `SYSTEM_TEAMPROJECTID` to be available in the execution environment (set automatically by Azure DevOps).
- Requires `BUILD_CONTAINERID`, `BUILD_BUILDID`, and `SYSTEM_TEAMPROJECTID` (all set automatically inside an Azure DevOps pipeline job) and `vso.build_execute` scope on the executor's token (the existing write service connection provides this).

### cache-memory (moved to `tools:`)
Memory is now configured as a first-class tool under `tools: cache-memory:` instead of `safe-outputs: memory:`. See the [Cache Memory section](./tools.md#cache-memory-cache-memory) in `docs/tools.md` for details.
4 changes: 4 additions & 0 deletions src/safeoutputs/create_pr.rs
@@ -2468,6 +2468,10 @@ new file mode 100755
pull_request_id: None,
pull_request_source_branch: None,
pull_request_target_branch: None,
build_container_id: None,
uploaded_pipeline_artifact_keys: std::sync::Arc::new(std::sync::Mutex::new(
std::collections::HashSet::new(),
)),
};
let outcome = result.execute_impl(&ctx).await.unwrap();
assert!(!outcome.success);
45 changes: 44 additions & 1 deletion src/safeoutputs/result.rs
@@ -3,7 +3,8 @@
use rmcp::ErrorData as McpError;
use rmcp::model::ErrorCode;
use serde::Serialize;
use std::collections::{HashMap, HashSet};
use std::sync::{Arc, Mutex};

use crate::sanitize::{SanitizeConfig, SanitizeContent};

@@ -67,6 +68,11 @@ pub struct ExecutionContext {
// ── ADO build variables (from BUILD_*/SYSTEM_*) ───────────────────────
/// Numeric build ID (`BUILD_BUILDID`)
pub build_id: Option<u64>,
/// Numeric file-container ID for the current build (`BUILD_CONTAINERID`).
/// Azure DevOps pre-creates one container per build at job initialization;
/// all artifacts in the build share this container, differentiated by item path.
/// Required by `upload-pipeline-artifact` to know where to upload bytes.
pub build_container_id: Option<u64>,
/// Human-readable build number (`BUILD_BUILDNUMBER`)
#[allow(dead_code)]
pub build_number: Option<String>,
@@ -110,6 +116,19 @@ pub struct ExecutionContext {
/// PR target branch (`SYSTEM_PULLREQUEST_TARGETBRANCH`)
#[allow(dead_code)]
pub pull_request_target_branch: Option<String>,

/// Per-run dedupe set for `upload-pipeline-artifact` when the
/// `require-unique-names` config is set. Stores `format!("{}/{}",
/// effective_build_id, final_name)` keys; the executor checks-and-inserts
/// before any HTTP call so a second call with the same target build /
/// artifact name fails fast instead of silently overwriting bytes in
/// the agent's shared file container.
///
    /// Wrapped in `Arc<Mutex<…>>` so all calls in one Stage 3 run observe the
    /// same set: cloning the `ExecutionContext` shares the underlying set
    /// instead of copying it. Each `Default` instance gets its own fresh
    /// empty set, which is correct for tests.
pub uploaded_pipeline_artifact_keys: Arc<Mutex<HashSet<String>>>,
}

impl ExecutionContext {
@@ -182,6 +201,7 @@ impl ExecutionContext {

// Build identification
build_id: env("BUILD_BUILDID").and_then(|s| s.parse().ok()),
build_container_id: env("BUILD_CONTAINERID").and_then(|s| s.parse().ok()),
build_number: env("BUILD_BUILDNUMBER"),
build_reason: env("BUILD_REASON"),
definition_name: env("BUILD_DEFINITIONNAME"),
@@ -199,6 +219,9 @@
pull_request_id: env("SYSTEM_PULLREQUEST_PULLREQUESTID"),
pull_request_source_branch: env("SYSTEM_PULLREQUEST_SOURCEBRANCH"),
pull_request_target_branch: env("SYSTEM_PULLREQUEST_TARGETBRANCH"),

// Per-run state for upload-pipeline-artifact dedupe.
uploaded_pipeline_artifact_keys: Arc::new(Mutex::new(HashSet::new())),
}
}
}
@@ -729,6 +752,26 @@ mod tests {
assert!(ctx.build_id.is_none());
}

#[test]
fn test_from_env_lookup_build_container_id_parses_numeric() {
let ctx =
ExecutionContext::from_env_lookup(env_from(&[("BUILD_CONTAINERID", "112233")]));
assert_eq!(ctx.build_container_id, Some(112233));
}

#[test]
fn test_from_env_lookup_build_container_id_none_for_non_numeric() {
let ctx =
ExecutionContext::from_env_lookup(env_from(&[("BUILD_CONTAINERID", "not-numeric")]));
assert!(ctx.build_container_id.is_none());
}

#[test]
fn test_from_env_lookup_build_container_id_none_when_unset() {
let ctx = ExecutionContext::from_env_lookup(env_from(&[]));
assert!(ctx.build_container_id.is_none());
}

#[test]
fn test_from_env_lookup_populates_triggered_by_fields() {
let ctx = ExecutionContext::from_env_lookup(env_from(&[
4 changes: 4 additions & 0 deletions src/safeoutputs/upload_build_attachment.rs
@@ -817,6 +817,10 @@ attachment-type: "agent-artifact"
pull_request_id: None,
pull_request_source_branch: None,
pull_request_target_branch: None,
build_container_id: None,
uploaded_pipeline_artifact_keys: std::sync::Arc::new(std::sync::Mutex::new(
std::collections::HashSet::new(),
)),
}
}
