cloud: revise Dedicated import docs for the new wizard UX #22699

Open

alastori wants to merge 12 commits into pingcap:release-8.5 from alastori:cloud/dedicated-import-ux

Conversation

@alastori
Collaborator

@alastori alastori commented Apr 7, 2026

Summary

Update the TiDB Cloud Dedicated import docs to match the new 3-step wizard introduced in tidb-cloud-console#4617. Mirrors the Serverless rewrite from #21361.

Files changed:

  • `tidb-cloud/import-csv-files.md` - rewrite the per-provider walkthroughs (S3, GCS, Azure Blob)
  • `tidb-cloud/import-parquet-files.md` - same rewrite for Parquet
  • `tidb-cloud/troubleshoot-import-access-denied-error.md` - replace literal sample IDs with placeholders + add discovery procedure

Key changes (per provider walkthrough):

  • Replace per-provider "Import Data from X" pages with the unified "Import Data from Cloud Storage" page that exposes a Storage Provider dropdown
  • Rename File URI / Folder URI to Source Files URI and document both single-file and folder URI formats
  • Rename Bucket Access to Credentials; for AWS, point users at the Having trouble? Create Role ARN manually expandable to fetch the TiDB Cloud Account ID and TiDB Cloud External ID for their cluster
  • Replace Connect + Destination + Start Import with the new Destination Mapping step that supports automatic mapping via file naming conventions or manual mapping rules with wildcard patterns
  • Document the new Pre-check step (separate from manual scan retries)
  • Preserve the Azure Blob Storage Private Link content from cloud: document Azure Blob Storage Private Link import #22427

Troubleshoot doc fix:

The page previously hard-coded `380838443567` as "the TiDB Cloud Account ID" and a literal hex External ID. These values are environment-specific and per-cluster, so this PR replaces them with `<TiDB-Cloud-Account-ID>` and `<TiDB-Cloud-External-ID>` placeholders and adds a short discovery procedure that points users at the wizard's Add New Role ARN -> Having trouble? Create Role ARN manually expandable to fetch the actual values from the console.
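For context on where those placeholders end up, this is the standard shape of an AWS cross-account trust policy with an `sts:ExternalId` condition. The values below are invented for illustration, and the policy body follows AWS's documented cross-account pattern rather than being a literal excerpt from the TiDB Cloud docs:

```python
import json

# Hypothetical values; in practice, fetch them from the wizard's
# "Having trouble? Create Role ARN manually" expandable, because they are
# environment-specific and per-cluster.
tidb_cloud_account_id = "123456789012"
tidb_cloud_external_id = "abcdef0123456789"

# Standard cross-account IAM trust policy: TiDB Cloud's account may assume
# the role only when it presents the matching External ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{tidb_cloud_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": tidb_cloud_external_id}
            },
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Hard-coding the old literal values in a policy like this is exactly the failure mode the placeholder change guards against.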

Background

Captured the new wizard end-to-end against three Dedicated clusters (AWS, Azure, GCP) on the staging deploy preview `deploy-preview-4617--staging-tidbcloud.netlify.app`. The full AS-IS report with screenshots, observations, and the doc-update plan is at https://pingcap.feishu.cn/wiki/I5lgwOQnSibElNka3U4cX899nud

Deferrals: Azure Blob and GCS provider walkthroughs were modeled after the Serverless rewrite (#21361) and the existing Azure Private Link content (#22427); they were not exercised end-to-end in this session and would benefit from a follow-up validation pass before the wizard ships to GA.

Test plan

Related

@ti-chi-bot ti-chi-bot bot added missing-translation-status This PR does not have translation status info. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Apr 7, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the TiDB Cloud documentation for CSV and Parquet file imports to align with recent UI changes, including new storage provider selections and destination mapping workflows. It also improves the troubleshooting guide for AWS IAM access by providing clearer instructions on obtaining environment-specific IDs. The review feedback suggests several refinements to enhance readability and professional tone, such as converting passive voice to active voice, correcting minor grammatical omissions, and standardizing UI terminology.

> - To achieve better performance, it is recommended to limit the size of each compressed file to 100 MiB.
> - The Snappy compressed file must be in the [official Snappy format](https://github.com/google/snappy). Other variants of Snappy compression are not supported.
> - For uncompressed files, if you cannot update the CSV filenames according to the preceding rules in some cases (for example, the CSV file links are also used by your other programs), you can keep the filenames unchanged and use the **Mapping Settings** in [Step 4](#step-4-import-csv-files-to-tidb-cloud) to import your source data to a single target table.
> - For uncompressed files, if you cannot update the CSV filenames according to the preceding rules (for example, the CSV file links are also used by your other programs), you can keep the filenames unchanged and unselect **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** in the **Destination Mapping** step of [Step 4](#step-4-import-csv-files-to-tidb-cloud) to manually map your source files to a single target table.
Contributor


low

The term "deselect" is generally preferred over "unselect" in software documentation when referring to clearing a checkbox or radio button.

Suggested change
> - For uncompressed files, if you cannot update the CSV filenames according to the preceding rules (for example, the CSV file links are also used by your other programs), you can keep the filenames unchanged and unselect **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** in the **Destination Mapping** step of [Step 4](#step-4-import-csv-files-to-tidb-cloud) to manually map your source files to a single target table.
> - For uncompressed files, if you cannot update the CSV filenames according to the preceding rules (for example, the CSV file links are also used by your other programs), you can keep the filenames unchanged and deselect **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** in the **Destination Mapping** step of [Step 4](#step-4-import-csv-files-to-tidb-cloud) to manually map your source files to a single target table.
References
  1. Clarity, simplicity, and readability are key aspects of the review. (link)

5. In the **Destination Mapping** section, specify how source files are mapped to target tables.

When importing multiple files, you can use **Advanced Settings** > **Mapping Settings** to customize the mapping of individual target tables to their corresponding CSV files. For each target database and table:
When a directory is specified in **Source Files URI**, the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option is selected by default.
Contributor


low

Avoid using the passive voice to make the instructions more direct and clear.

Suggested change
When a directory is specified in **Source Files URI**, the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option is selected by default.
When you specify a directory in **Source Files URI**, TiDB Cloud selects the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option by default.
References
  1. Avoid passive voice overuse. (link)

- `s3://mybucket/myfolder/my-data*.csv`: all CSV files starting with `my-data` (such as `my-data10.csv` and `my-data100.csv`) in `myfolder` will be imported into the same target table.
> **Note:**
>
> When a single file is specified in **Source Files URI**, the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option is not displayed, and TiDB Cloud automatically populates the **Source** field with the file name. In this case, you only need to enter the target database and table for data import.
Contributor


low

Avoid using the passive voice to improve readability and directness.

Suggested change
> When a single file is specified in **Source Files URI**, the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option is not displayed, and TiDB Cloud automatically populates the **Source** field with the file name. In this case, you only need to enter the target database and table for data import.
When you specify a single file in **Source Files URI**, TiDB Cloud does not display the **Use [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) for automatic mapping** option and automatically populates the **Source** field with the file name. In this case, you only need to enter the target database and table for data import.
References
  1. Avoid passive voice overuse. (link)


- To let TiDB Cloud automatically map all source files that follow the [File naming conventions](/tidb-cloud/naming-conventions-for-data-import.md) to their corresponding tables, keep this option selected and select **CSV** as the data format. If your source folder includes schema files (such as `${db_name}-schema-create.sql` and `${db_name}.${table_name}-schema.sql`), TiDB Cloud uses them to create the target databases and tables when they do not already exist.

- To manually configure the mapping rules to associate your source CSV files with the target database and table, unselect this option, and then fill in the following fields:
Contributor


low

The term "deselect" is preferred over "unselect" for UI actions.

Suggested change
- To manually configure the mapping rules to associate your source CSV files with the target database and table, unselect this option, and then fill in the following fields:
- To manually configure the mapping rules to associate your source CSV files with the target database and table, deselect this option, and then fill in the following fields:
References
  1. Clarity, simplicity, and readability. (link)
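The automatic-mapping behavior discussed in this thread can be sketched in Python. The filename patterns (`${db_name}.${table_name}.${suffix}.csv`, `${db_name}-schema-create.sql`, `${db_name}.${table_name}-schema.sql`) come from the file naming conventions doc quoted above; the classifier itself is illustrative, not TiDB Cloud's implementation:

```python
import re

# Illustrative sketch of the documented naming-convention mapping:
#   mydb.mytable.01.csv     -> data rows for table `mydb`.`mytable`
#   mydb-schema-create.sql  -> creates database `mydb` if missing
#   mydb.mytable-schema.sql -> creates table `mydb`.`mytable` if missing
def classify(filename):
    m = re.fullmatch(r"(?P<db>[^.]+)-schema-create\.sql", filename)
    if m:
        return ("create-database", m["db"], None)
    m = re.fullmatch(r"(?P<db>[^.]+)\.(?P<table>[^.]+)-schema\.sql", filename)
    if m:
        return ("create-table", m["db"], m["table"])
    m = re.fullmatch(r"(?P<db>[^.]+)\.(?P<table>[^.]+)(\.\d+)?\.csv", filename)
    if m:
        return ("data", m["db"], m["table"])
    # Files that match none of the conventions need a manual mapping rule.
    return ("unmatched", None, None)

print(classify("mydb.mytable.01.csv"))  # ('data', 'mydb', 'mytable')
```

A file that classifies as `unmatched` is the case where the docs tell users to deselect automatic mapping and define manual rules instead.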


- To manually configure the mapping rules to associate your source CSV files with the target database and table, unselect this option, and then fill in the following fields:

- **Source**: enter the file name pattern in the `[file_name].csv` format. For example, `TableName.01.csv`. You can also use wildcards to match multiple files. Only `*` and `?` wildcards are supported.
Contributor


low

Avoid passive voice to make the statement more direct.

Suggested change
- **Source**: enter the file name pattern in the `[file_name].csv` format. For example, `TableName.01.csv`. You can also use wildcards to match multiple files. Only `*` and `?` wildcards are supported.
- **Source**: enter the file name pattern in the `[file_name].csv` format. For example, `TableName.01.csv`. You can also use wildcards to match multiple files. TiDB Cloud only supports the `*` and `?` wildcards.
References
  1. Avoid passive voice overuse. (link)
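The `*` and `?` wildcard semantics described in the excerpt are shell-style globbing, which Python's `fnmatch` can demonstrate (illustrative only; note that `fnmatch` also accepts `[seq]` patterns, which the wizard's Source field does not):

```python
from fnmatch import fnmatch

files = ["my-data10.csv", "my-data100.csv", "TableName.01.csv", "notes.txt"]

# `*` matches any run of characters; `?` matches exactly one character.
star_matches = [f for f in files if fnmatch(f, "my-data*.csv")]
question_matches = [f for f in files if fnmatch(f, "TableName.0?.csv")]

print(star_matches)      # ['my-data10.csv', 'my-data100.csv']
print(question_matches)  # ['TableName.01.csv']
```

This mirrors the `s3://mybucket/myfolder/my-data*.csv` example above, where all files starting with `my-data` land in the same target table.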

If necessary, click **Edit CSV Configuration** to configure the options according to your CSV files. You can set the separator and delimiter characters, specify whether to use backslashes for escaped characters, and specify whether your files contain a header row.

7. When the import progress shows **Completed**, check the imported tables.
6. Click **Next**. TiDB Cloud scans the source files accordingly.
Contributor


low

The word "accordingly" is unnecessary here and can be removed for conciseness.

Suggested change
6. Click **Next**. TiDB Cloud scans the source files accordingly.
6. Click **Next**. TiDB Cloud scans the source files.
References
  1. Avoid unnecessary words and repetition. (link)

@alastori
Collaborator Author

alastori commented Apr 7, 2026

cc @qiancai @zoubingwu

@qiancai qiancai self-assigned this Apr 7, 2026
@qiancai qiancai added translation/no-need No need to translate this PR. area/tidb-cloud This PR relates to the area of TiDB Cloud. for-cloud-release This PR is related to TiDB Cloud release. labels Apr 7, 2026
@ti-chi-bot ti-chi-bot bot removed the missing-translation-status This PR does not have translation status info. label Apr 7, 2026
Collaborator

@qiancai qiancai left a comment


(comment removed)

> For TiDB Cloud Starter or TiDB Cloud Essential, see [Import Apache Parquet Files from Cloud Storage into TiDB Cloud Starter or Essential](/tidb-cloud/import-parquet-files-serverless.md).

## Limitations

Collaborator


The note in Step 1 still reads:

If you cannot update the Parquet filenames according to the preceding rules in some cases (for example, the Parquet file links are also used by your other programs), you can keep the filenames unchanged and use the Mapping Settings in Step 4 to import your source data to a single target table.

The CSV doc's equivalent note was updated to reference the new Destination Mapping step. This Parquet note should receive the same update. Suggested replacement:

If you cannot update the Parquet filenames according to the preceding rules (for example, the Parquet file links are also used by your other programs), you can keep the filenames unchanged and deselect Use File naming conventions for automatic mapping in the Destination Mapping substep of Step 4 to manually map your source files to a single target table.

Collaborator Author


Thanks, fixed in the latest commit. The Step 1 Note now mirrors the CSV doc wording.

Collaborator


The Troubleshooting section in the Parquet doc was not updated, but the same section in import-csv-files.md was. Three stale references remain:

  • "Resolve warnings during data import" — still says After clicking **Start Import** and using **Advanced Settings** to make changes. Should match the CSV doc's updated wording: "If the Pre-check step shows a warning…returning to the Destination Mapping step and switching to manual mapping rules."
  • "Zero rows in the imported tables" — still says no data files matched the Bucket URI and using **Advanced Settings** to make changes. Should say source URI and reference the Destination Mapping step, matching the CSV doc.
  • "After resolving these issues, you need to import the data again." — the CSV doc now says "return to the wizard and run the import again". The Parquet doc should match.

Collaborator Author


Good catch, all three references in the Parquet Troubleshooting section are now updated to match the CSV doc:

  • "Resolve warnings during data import" now references the Pre-check step and switching to manual mapping rules on the Destination Mapping step.
  • "Zero rows in the imported tables" now says "source URI" (not "Bucket URI") and also references the Destination Mapping fallback.
  • "After resolving these issues" now says "return to the wizard and run the import again".


- For CSV files, see **Advanced Settings** > **Mapping Settings** in [Step 4. Import CSV files to TiDB Cloud](/tidb-cloud/import-csv-files.md#step-4-import-csv-files-to-tidb-cloud)
- For Parquet files, see **Advanced Settings** > **Mapping Settings** in [Step 4. Import Parquet files to TiDB Cloud](/tidb-cloud/import-parquet-files.md#step-4-import-parquet-files-to-tidb-cloud)
In the import wizard, on the **Destination Mapping** step, unselect **Use File naming conventions for automatic mapping**, and then fill in the **Source**, **Target Database**, and **Target Table** fields. The **Source** field accepts a file name pattern that supports the `*` and `?` wildcards.
Collaborator


Suggested change
In the import wizard, on the **Destination Mapping** step, unselect **Use File naming conventions for automatic mapping**, and then fill in the **Source**, **Target Database**, and **Target Table** fields. The **Source** field accepts a file name pattern that supports the `*` and `?` wildcards.
In the import wizard, on the **Destination Mapping** step, deselect **Use File naming conventions for automatic mapping**, and then fill in the **Source**, **Target Database**, and **Target Table** fields. The **Source** field accepts a file name pattern that supports the `*` and `?` wildcards.

Collaborator Author


Applied. Also prefixed the label with "TiDB" (Use TiDB file naming conventions for automatic mapping) to match the literal wizard checkbox text. I swept the same fix through import-csv-files.md and import-parquet-files.md so the label reads consistently across all three files (19 occurrences total).

alastori added 6 commits April 7, 2026 16:15
Update tidb-cloud/import-csv-files.md and tidb-cloud/import-parquet-files.md
to match the new 3-step Dedicated import wizard (Connection -> Destination
Mapping -> Pre-check -> Start Import) introduced in tidb-cloud-console#4617:

- Replace per-provider "Import Data from X" pages with the unified
  "Import Data from Cloud Storage" page that exposes a Storage Provider
  dropdown
- Rename "File URI" / "Folder URI" to "Source Files URI" and document
  both single-file and folder URI formats
- Rename "Bucket Access" to "Credentials"; for AWS, point users to the
  "Having trouble? Create Role ARN manually" expandable to fetch the
  TiDB Cloud Account ID and External ID for their cluster
- Replace "Connect" + "Destination" + "Start Import" steps with the new
  "Destination Mapping" step that supports automatic mapping via file
  naming conventions or manual mapping rules with wildcard patterns
- Document the new "Pre-check" step (separate from manual scan retries)
- Preserve the Azure Blob Storage Private Link content from pingcap#22427

Also update tidb-cloud/troubleshoot-import-access-denied-error.md to:

- Replace the literal sample Account ID and External ID values with
  <TiDB-Cloud-Account-ID> and <TiDB-Cloud-External-ID> placeholders
  (these values are environment-specific and per-cluster)
- Add a discovery procedure that points users at the wizard's
  "Add New Role ARN" -> "Having trouble? Create Role ARN manually"
  expandable to fetch the actual values from the console

Mirrors the Serverless rewrite from pingcap#21361 (Cloud: Import ux optimization).

Related: DM-12710, DM-12798, DM-12799
Update two more files referenced from the import walkthroughs to match
the new 3-step Dedicated import wizard:

- tidb-cloud/dedicated-external-storage.md
  Update the discovery procedures for the TiDB Cloud Account ID,
  TiDB Cloud External ID, and Google Cloud Service Account ID so they
  point at the new "Import Data from Cloud Storage" page instead of the
  legacy per-provider pages, and use the new "Having trouble? Create
  Role ARN manually" expandable label.

- tidb-cloud/naming-conventions-for-data-import.md
  Replace the legacy "Advanced Settings > Mapping Settings" reference
  in the file-pattern section with the new "Destination Mapping" step
  and the "Use File naming conventions for automatic mapping" toggle.

Related: DM-12710
The Dedicated import wizard uses the field label "Source URI" while the
Premium and Serverless wizards use "Source Files URI". The doc reflects
the actual Dedicated wizard label. The label divergence between tiers is
tracked separately and the wizard will be aligned in a follow-up.
- Replace "unselect" with "deselect" for the auto-mapping toggle
- Convert passive "X is selected by default" / "is not displayed" to
  active "TiDB Cloud selects X by default" / "does not display X"
- Convert passive "Only X and Y are supported" to active "TiDB Cloud
  only supports X and Y"
- Drop unnecessary "accordingly" from "scans the source files"

Skipped Gemini's "create a new one" article suggestions because the
actual wizard button text is "Click here to create new one with AWS
CloudFormation" (no article); aligning the doc to the button text
takes precedence over the grammar nit. The missing article is tracked
as part of DM-12803.
Match the pattern in tidb-cloud/premium/import-csv-files-premium.md and
tidb-cloud/import-csv-files-serverless.md so users landing on the
Dedicated page can quickly jump to the correct doc for their tier.

- import-csv-files.md: link to both Starter/Essential and Premium CSV
  import docs (Premium has its own CSV import doc).
- import-parquet-files.md: link only to Starter/Essential (Premium has
  no separate Parquet import doc).
Apply Grace's review comments on pingcap#22699:

- import-parquet-files.md Step 1 note: reference the Destination Mapping
  step and use "deselect", matching the CSV doc.
- import-parquet-files.md Troubleshooting: update "Resolve warnings during
  data import" and "Zero rows in the imported tables" to match the CSV doc
  (Pre-check wording, source URI, return-to-wizard).
- naming-conventions-for-data-import.md: change "unselect" to "deselect".

Also prefix the checkbox label with "TiDB" across csv, parquet, and
naming-conventions docs (19 occurrences) to match the literal wizard UI
text "Use TiDB file naming conventions for automatic mapping".
@alastori alastori force-pushed the cloud/dedicated-import-ux branch from 5e3a385 to 983e275 Compare April 7, 2026 20:23
@ti-chi-bot

ti-chi-bot bot commented Apr 7, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from qiancai. For more information see the Code Review Process.
Please ensure that each of them provides their approval before proceeding.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

3. Click **Import data from Cloud Storage**.

4. Click **Show Google Cloud Server Account ID**, and then copy the Service Account ID for later use.
4. On the **Import Data from Cloud Storage** page, set **Storage Provider** to **Google Cloud Storage**, and then copy the Google Cloud Service Account ID displayed under **Credentials** for later use.
Collaborator


Suggested change
4. On the **Import Data from Cloud Storage** page, set **Storage Provider** to **Google Cloud Storage**, and then copy the Google Cloud Service Account ID displayed under **Credentials** for later use.
4. On the **Import Data from Cloud Storage** page, set **Storage Provider** to **Google Cloud Storage**, and then copy the Google Cloud Service Account ID displayed under **Credential** for later use.

- **Source URI**:
- When importing one file, enter the source file URI in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].csv`. For example, `gs://mybucket/myfolder/TableName.01.csv`.
- When importing multiple files, enter the source folder URI in the following format `gs://[bucket_name]/[data_source_folder]/`. For example, `gs://mybucket/myfolder/`.
- **Credentials**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).
Collaborator

@hfxsd hfxsd Apr 8, 2026


Suggested change
- **Credentials**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).
- **Credential**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).
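The single-file vs. folder URI distinction in these excerpts can be checked with a small regex sketch. The grammar below is an assumption inferred from the documented examples (`gs://mybucket/myfolder/` and `gs://mybucket/myfolder/TableName.01.csv`), not an official validator:

```python
import re

# Folder import URIs must end with "/"; single-file URIs must name a
# .csv or .parquet object. Grammar assumed from the doc examples.
FOLDER_URI = re.compile(r"gs://[^/]+(/[^/]+)*/")
FILE_URI = re.compile(r"gs://[^/]+(/[^/]+)*/[^/]+\.(csv|parquet)")

def uri_kind(uri):
    if FOLDER_URI.fullmatch(uri):
        return "folder"
    if FILE_URI.fullmatch(uri):
        return "file"
    return "invalid"

print(uri_kind("gs://mybucket/myfolder/"))                 # folder
print(uri_kind("gs://mybucket/myfolder/TableName.01.csv")) # file
```

The trailing `/` is the signal the wizard uses to treat the URI as a folder, which is why the docs call it out explicitly for the Azure case as well.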

- **Source URI**:
- When importing one file, enter the source file URI in the following format `gs://[bucket_name]/[data_source_folder]/[file_name].parquet`. For example, `gs://mybucket/myfolder/TableName.01.parquet`.
- When importing multiple files, enter the source folder URI in the following format `gs://[bucket_name]/[data_source_folder]/`. For example, `gs://mybucket/myfolder/`.
- **Credentials**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).
Collaborator


Suggested change
- **Credentials**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).
- **Credential**: TiDB Cloud provides a unique Google Cloud Service Account ID on this page (such as `example-service-account@your-project.iam.gserviceaccount.com`). Grant this Service Account ID the necessary IAM permissions (such as `Storage Object Viewer`) on your GCS bucket within your Google Cloud project. For more information, see [Configure GCS access](/tidb-cloud/dedicated-external-storage.md#configure-gcs-access).


- **Folder URI**: enter the Azure Blob Storage URI where your source files are located using the format `https://[account_name].blob.core.windows.net/[container_name]/[data_source_folder]/`. The path must end with a `/`. For example, `https://myaccount.blob.core.windows.net/mycontainer/data-ingestion/`.
- **SAS Token**: enter an account SAS token to allow TiDB Cloud to access the source files in your Azure Blob Storage container. If you don't have one yet, you can create it using the provided Azure ARM template by clicking **Click here to create a new one with Azure ARM template** and following the instructions on the screen. Alternatively, you can manually create an account SAS token. For more information, see [Configure Azure Blob Storage access](/tidb-cloud/dedicated-external-storage.md#configure-azure-blob-storage-access).
- **Credentials**: enter an account SAS token to allow TiDB Cloud to access the source files in your Azure Blob Storage container. If you do not have one yet, click **Click here to create a new one with Azure ARM template** and follow the instructions on the screen, or manually create an account SAS token. For more information, see [Configure Azure Blob Storage access](/tidb-cloud/dedicated-external-storage.md#configure-azure-blob-storage-access).
Collaborator


Suggested change
- **Credentials**: enter an account SAS token to allow TiDB Cloud to access the source files in your Azure Blob Storage container. If you do not have one yet, click **Click here to create a new one with Azure ARM template** and follow the instructions on the screen, or manually create an account SAS token. For more information, see [Configure Azure Blob Storage access](/tidb-cloud/dedicated-external-storage.md#configure-azure-blob-storage-access).
- **SAS Token**: enter an account SAS token to allow TiDB Cloud to access the source files in your Azure Blob Storage container. If you do not have one yet, click **Click here to create a new one with Azure ARM template** and follow the instructions on the screen, or manually create an account SAS token. For more information, see [Configure Azure Blob Storage access](/tidb-cloud/dedicated-external-storage.md#configure-azure-blob-storage-access).

@ti-chi-bot

ti-chi-bot bot commented Apr 8, 2026

[LGTM Timeline notifier]

Timeline:

  • 2026-04-08 04:00:56.669291868 +0000 UTC m=+928861.874651925: ☑️ agreed by hfxsd.

@ti-chi-bot ti-chi-bot bot added the needs-1-more-lgtm Indicates a PR needs 1 more LGTM. label Apr 8, 2026
@ti-chi-bot

ti-chi-bot bot commented Apr 8, 2026

@zoubingwu: adding LGTM is restricted to approvers and reviewers in OWNERS files.

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Member

@zoubingwu zoubingwu left a comment


LGTM

@ti-chi-bot

ti-chi-bot bot commented Apr 8, 2026

@zoubingwu: adding LGTM is restricted to approvers and reviewers in OWNERS files.

Details

In response to this:

LGTM

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot

ti-chi-bot bot commented Apr 8, 2026

@alastori: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
--- | --- | --- | --- | ---
pull-verify | 0594876 | link | true | `/test pull-verify`

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
