ArchiveMetadata is not always working at KIT after switching all setups to 11.2.3 #8080

@ArturAkh

Description

Dear dCache developers,

We at KIT have recently switched to the dCache 11.2.3 release and started to have a closer look at how ArchiveMetadata is propagated downstream to our HSM provider script.

Unfortunately, it seems it is not working for all experiments we host.

While it works on our test instance (PPS) and for CMS, for ATLAS the ArchiveMetadata does not appear in the logs of the HSM tape script, even though FTS sends it in the same way as for CMS.

We have followed up on the details in a GGUS ticket with ATLAS: https://helpdesk.ggus.eu/#ticket/zoom/1002432

In that process, we have asked ATLAS to perform test transfers with plain curl.

Test from ATLAS DDM ops for PPS KIT instance (test setup)

Command:

date; curl -s -v --cert /tmp/x509up_u$(id -u) --key /tmp/x509up_u$(id -u) --cacert /tmp/x509up_u$(id -u) --capath /etc/grid-security/certificates -L --upload-file /tmp/1G "https://ppsdcache-kit.gridka.de:2880/pnfs/gridka.de/dteam/tape/data/15GB_files/$(uuidgen).file" -H 'ArchiveMetadata: eyJleHBlcmltZW50IjoidGVzdCIsImNhbXBhaWduIjoiVGVzdDI2Iiwic3RyZWFtIjoicmF3In0=' --capath /cvmfs/grid.cern.ch/etc/grid-security/certificates --show-error &> /tmp/aaa
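For reference, the ArchiveMetadata header value is base64-encoded JSON; a minimal Python snippet (standing alone, using only the value from the curl command above) decodes it:

```python
import base64
import json

# ArchiveMetadata header value from the curl test above (base64-encoded JSON)
value = "eyJleHBlcmltZW50IjoidGVzdCIsImNhbXBhaWduIjoiVGVzdDI2Iiwic3RyZWFtIjoicmF3In0="

# Decode and parse it to show the metadata actually being sent
metadata = json.loads(base64.b64decode(value))
print(metadata)
# {'experiment': 'test', 'campaign': 'Test26', 'stream': 'raw'}
```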

Resulting command downstream in the HSM script log:

/usr/bin/dc2hpss.py put 0000E11E6E4799704C5E805D17FB9E4A1080 /export/f01-152-138-e_wT_shared/data/0000E11E6E4799704C5E805D17FB9E4A1080 -si=size=1073741824;new=true;stored=false;sClass=ch_dteam:DTEAM;cClass=-;hsm=osm;accessLatency=NEARLINE;retentionPolicy=CUSTODIAL;path=/pnfs/gridka.de/dteam/tape/data/15GB_files/9dfcf9c1-f524-4f81-bb9e-669670bb25f1.file;uid=17001;archive_metadata=eyJleHBlcmltZW50IjoidGVzdCIsImNhbXBhaWduIjoiVGVzdDI2Iiwic3RyZWFtIjoicmF3In0=;gid=5900;StoreName=ch_dteam;links=0000DD50BDA832BF4E31A854DBFD3377ED98 9dfcf9c1-f524-4f81-bb9e-669670bb25f1.file;LinkGroupId=3;flag-c=1:c02d0001;store=ch_dteam;group=DTEAM;bfid=<Unknown>;

Test from ATLAS DDM ops on actual ATLAS dCache setup

Command:

$ date; curl -s -v --cert /tmp/x509up_u$(id -u) --key /tmp/x509up_u$(id -u) --cacert /tmp/x509up_u$(id -u) --capath /etc/grid-security/certificates -L -H 'ArchiveMetadata: eyJleHBlcmltZW50IjoidGVzdCIsImNhbXBhaWduIjoiVGVzdDI2Iiwic3RyZWFtIjoicmF3In0=' --show-error --upload-file /tmp/1G https://atlasdcache-kit-tape.gridka.de:2880/pnfs/gridka.de/atlas/atlasdatatape/user/test.vokac.can.be.removed.$(uuidgen)

Resulting command downstream in the HSM script log:

/usr/bin/dc2hpss.py put 00004B7EA45EDEDD4C89BFBDA195EC4A87F1 /export/f01-125-106-e_wT_atlas/data/00004B7EA45EDEDD4C89BFBDA195EC4A87F1 -si=size=1073741824;new=true;stored=false;sClass=dc_atlas:ATLAS;cClass=-;hsm=osm;accessLatency=NEARLINE;retentionPolicy=CUSTODIAL;path=/pnfs/gridka.de/atlas/atlasdatatape/user/test.vokac.can.be.removed.c974238c-66d5-47aa-a4f5-68ff0d294769;uid=11001;writeToken=745824;gid=5300;StoreName=dc_atlas;SpaceTokenDescription=ATLASDATATAPE;links=00000356D423703749308B993AF3B68084D0 test.vokac.can.be.removed.c974238c-66d5-47aa-a4f5-68ff0d294769;SpaceToken=745824;LinkGroupId=7;flag-c=1:c02d0001;store=dc_atlas;group=ATLAS;bfid=<Unknown>;

The only difference I can think of is the use of writeToken/SpaceToken in the ATLAS case.
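As a quick sanity check, here is a minimal Python sketch (the `-si` storage-info strings are copied verbatim from the two log lines above) that diffs the key sets of the two storage-info strings; it confirms archive_metadata is the only field missing on the ATLAS side, and writeToken/SpaceToken/SpaceTokenDescription are the only extras:

```python
def si_keys(si):
    """Return the set of keys in a ';'-separated -si key=value string."""
    return {entry.split('=', 1)[0] for entry in si.strip(';').split(';') if '=' in entry}

# -si value from the PPS (dteam) HSM log line above
pps_si = ("size=1073741824;new=true;stored=false;sClass=ch_dteam:DTEAM;cClass=-;hsm=osm;"
          "accessLatency=NEARLINE;retentionPolicy=CUSTODIAL;"
          "path=/pnfs/gridka.de/dteam/tape/data/15GB_files/9dfcf9c1-f524-4f81-bb9e-669670bb25f1.file;"
          "uid=17001;archive_metadata=eyJleHBlcmltZW50IjoidGVzdCIsImNhbXBhaWduIjoiVGVzdDI2Iiwic3RyZWFtIjoicmF3In0=;"
          "gid=5900;StoreName=ch_dteam;"
          "links=0000DD50BDA832BF4E31A854DBFD3377ED98 9dfcf9c1-f524-4f81-bb9e-669670bb25f1.file;"
          "LinkGroupId=3;flag-c=1:c02d0001;store=ch_dteam;group=DTEAM;bfid=<Unknown>;")

# -si value from the ATLAS HSM log line above
atlas_si = ("size=1073741824;new=true;stored=false;sClass=dc_atlas:ATLAS;cClass=-;hsm=osm;"
            "accessLatency=NEARLINE;retentionPolicy=CUSTODIAL;"
            "path=/pnfs/gridka.de/atlas/atlasdatatape/user/test.vokac.can.be.removed.c974238c-66d5-47aa-a4f5-68ff0d294769;"
            "uid=11001;writeToken=745824;gid=5300;StoreName=dc_atlas;SpaceTokenDescription=ATLASDATATAPE;"
            "links=00000356D423703749308B993AF3B68084D0 test.vokac.can.be.removed.c974238c-66d5-47aa-a4f5-68ff0d294769;"
            "SpaceToken=745824;LinkGroupId=7;flag-c=1:c02d0001;store=dc_atlas;group=ATLAS;bfid=<Unknown>;")

print("PPS only:  ", sorted(si_keys(pps_si) - si_keys(atlas_si)))
# PPS only:   ['archive_metadata']
print("ATLAS only:", sorted(si_keys(atlas_si) - si_keys(pps_si)))
# ATLAS only: ['SpaceToken', 'SpaceTokenDescription', 'writeToken']
```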

Would it be possible for you to follow up on this and take a closer look at how the ArchiveMetadata is propagated down to the HSM provider?

Thanks,

Artur
