297 changes: 1 addition & 296 deletions TOC.md

Large diffs are not rendered by default.

File renamed without changes.
2 changes: 1 addition & 1 deletion develop/dev-guide-connection-parameters.md
@@ -8,7 +8,7 @@ This document describes how to configure connection pools and connection paramet

<CustomContent platform="tidb">

-If you are interested in more tips about Java application development, see [Best Practices for Developing Java Applications with TiDB](/best-practices/java-app-best-practices.md#connection-pool)
+If you are interested in more tips about Java application development, see [Best Practices for Developing Java Applications with TiDB](/develop/java-app-best-practices.md#connection-pool)

</CustomContent>

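Connection pooling matters because every new database connection pays for a TCP and authentication handshake before the first query runs. As a rough, library-agnostic sketch (not HikariCP or any specific Java pool — the `Pool` and `fake_connect` names here are illustrative), reusing a small, fixed set of connections across many queries looks like this:

```python
import queue

class Pool:
    """Toy connection pool: reuse N "connections" instead of opening one per query."""
    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self):
        return self._idle.get()   # blocks when the pool is exhausted

    def release(self, conn):
        self._idle.put(conn)

opened = 0
def fake_connect():
    global opened
    opened += 1                   # stands in for a costly TCP + auth handshake
    return object()

pool = Pool(2, fake_connect)
for _ in range(100):              # 100 "queries" reuse the same 2 connections
    c = pool.acquire()
    pool.release(c)

print(opened)  # -> 2
```

A real pool adds timeouts, health checks, and max-lifetime eviction, but the core saving is the same: 100 queries cost 2 handshakes instead of 100.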
2 changes: 1 addition & 1 deletion develop/dev-guide-optimize-sql-best-practices.md
@@ -167,7 +167,7 @@ For how to locate and resolve transaction conflicts, see [Troubleshoot Lock Conf

<CustomContent platform="tidb">

-See [Best Practices for Developing Java Applications with TiDB](/best-practices/java-app-best-practices.md).
+See [Best Practices for Developing Java Applications with TiDB](/develop/java-app-best-practices.md).

</CustomContent>

2 changes: 1 addition & 1 deletion faq/sql-faq.md
@@ -241,7 +241,7 @@ SELECT column_name FROM table_name USE INDEX(index_name) WHERE where_conditio

## DDL Execution

-This section lists issues related to DDL statement execution. For detailed explanations on the DDL execution principles, see [Execution Principles and Best Practices of DDL Statements](/ddl-introduction.md).
+This section lists issues related to DDL statement execution. For detailed explanations on the DDL execution principles, see [Execution Principles and Best Practices of DDL Statements](/best-practices/ddl-introduction.md).

### How long does it take to perform various DDL operations?

2 changes: 1 addition & 1 deletion performance-tuning-practices.md
@@ -13,7 +13,7 @@ This document describes how to use these features together to analyze and compar
>
> [Top SQL](/dashboard/top-sql.md) and [Continuous Profiling](/dashboard/continuous-profiling.md) are not enabled by default. You need to enable them in advance.

-By running the same application with different JDBC configurations in these scenarios, this document shows you how the overall system performance is affected by different interactions between applications and databases, so that you can apply [Best Practices for Developing Java Applications with TiDB](/best-practices/java-app-best-practices.md) for better performance.
+By running the same application with different JDBC configurations in these scenarios, this document shows you how the overall system performance is affected by different interactions between applications and databases, so that you can apply [Best Practices for Developing Java Applications with TiDB](/develop/java-app-best-practices.md) for better performance.

## Environment description

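The performance differences between JDBC configurations largely come down to how many client-server round trips the driver issues for the same workload. A toy sketch of why batch rewriting (e.g., what `rewriteBatchedStatements` achieves in MySQL-compatible drivers) helps — the `FakeSession` class below is a stand-in for a driver connection, not a real JDBC API:

```python
class FakeSession:
    """Counts client-server round trips; stands in for a JDBC connection."""
    def __init__(self):
        self.round_trips = 0

    def execute(self, sql):
        self.round_trips += 1     # one network round trip per statement

    def execute_batch(self, sql, rows):
        self.round_trips += 1     # one round trip for the whole rewritten batch

rows = [(i,) for i in range(1000)]

one_by_one = FakeSession()
for r in rows:
    one_by_one.execute("INSERT INTO t VALUES (?)")

batched = FakeSession()
batched.execute_batch("INSERT INTO t VALUES (?)", rows)

print(one_by_one.round_trips, batched.round_trips)  # -> 1000 1
```

With per-statement network latency of even 1 ms, the first loop spends a full second just on the wire; the batched form spends roughly one round trip — which is the kind of difference Top SQL and Continuous Profiling make visible.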
2 changes: 1 addition & 1 deletion sql-statements/sql-statement-admin-show-ddl.md
@@ -57,7 +57,7 @@ The `ADMIN SHOW DDL JOBS` statement is used to view all the results in the curre
- `txn-merge`: Transactional backfill with a temporary index that gets merged with the original index when the backfill is finished.
- `SCHEMA_STATE`: the current state of the schema object that the DDL operates on. If `JOB_TYPE` is `ADD INDEX`, it is the state of the index; if `JOB_TYPE` is `ADD COLUMN`, it is the state of the column; if `JOB_TYPE` is `CREATE TABLE`, it is the state of the table. Common states include the following:
- `none`: indicates that it does not exist. Generally, after the `DROP` operation or after the `CREATE` operation fails and rolls back, it will become the `none` state.
-    - `delete only`, `write only`, `delete reorganization`, `write reorganization`: these four states are intermediate states. For their specific meanings, see [How the Online DDL Asynchronous Change Works in TiDB](/ddl-introduction.md#how-the-online-ddl-asynchronous-change-works-in-tidb). As the intermediate state conversion is fast, these states are generally not visible during operation. Only when performing `ADD INDEX` operation can the `write reorganization` state be seen, indicating that index data is being added.
+    - `delete only`, `write only`, `delete reorganization`, `write reorganization`: these four states are intermediate states. For their specific meanings, see [How the Online DDL Asynchronous Change Works in TiDB](/best-practices/ddl-introduction.md#how-the-online-ddl-asynchronous-change-works-in-tidb). As the intermediate state conversion is fast, these states are generally not visible during operation. Only when performing `ADD INDEX` operation can the `write reorganization` state be seen, indicating that index data is being added.
- `public`: indicates that it exists and is available to users. Generally, after `CREATE TABLE` and `ADD INDEX` (or `ADD COLUMN`) operations are completed, it will become the `public` state, indicating that the newly created table, column, and index can be read and written normally.
- `SCHEMA_ID`: the ID of the database where the DDL operation is performed.
- `TABLE_ID`: the ID of the table where the DDL operation is performed.
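The `SCHEMA_STATE` values above follow the online schema change protocol, in which each change steps through intermediate states so that all TiDB nodes stay within one schema version of each other. A simplified sketch of the state sequence for `ADD INDEX` (illustrative only — real TiDB interleaves data backfill with these transitions):

```python
# Intermediate schema states an index moves through during online ADD INDEX
# (simplified model of the online DDL asynchronous change).
ADD_INDEX_STATES = [
    "none",                  # index does not exist yet
    "delete only",           # deletes maintain the index; reads/writes do not see it
    "write only",            # writes maintain the index; reads still do not see it
    "write reorganization",  # existing rows are backfilled into the index
    "public",                # index is visible and usable by all sessions
]

def next_state(state):
    i = ADD_INDEX_STATES.index(state)
    if i == len(ADD_INDEX_STATES) - 1:
        return state         # "public": the schema change is complete
    return ADD_INDEX_STATES[i + 1]

state = "none"
seen = [state]
while state != "public":
    state = next_state(state)
    seen.append(state)
print(" -> ".join(seen))
```

This also explains why `write reorganization` is the only intermediate state you commonly observe: the other transitions are metadata-only and fast, while backfilling index data takes time proportional to the table size.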
10 changes: 10 additions & 0 deletions system-variables.md
@@ -1792,6 +1792,16 @@ MPP is a distributed computing framework provided by the TiFlash engine, which a
### tidb_enable_pseudo_for_outdated_stats <span class="version-mark">New in v5.3.0</span>

- Scope: SESSION | GLOBAL
+### `tidb_opt_selectivity_factor` <span class="version-mark">Introduced in v9.0.0</span>
+
+- Scope: SESSION | GLOBAL
+- Is persisted to the cluster: Yes
+- Is controlled by Hint [SET_VAR](/optimizer-hints.md#set_varvar_namevar_value): Yes
+- Type: Floating-point number
+- Value range: `[0, 1]`
+- Default value: `0.8`
+- This variable specifies the default selectivity factor for the TiDB optimizer. In some cases, when the optimizer cannot derive the predicate selectivity based on statistics, the optimizer uses this default selectivity as a substitute. **It is not recommended** to modify this value.

- Persists to cluster: Yes
- Type: Boolean
- Default value: `OFF`
Comment on lines +1795 to 1807

medium

There seems to be a structural issue here. The documentation for the new system variable tidb_opt_selectivity_factor has been inserted in the middle of the properties list for tidb_enable_pseudo_for_outdated_stats. This breaks the document's structure and readability.

The new section for tidb_opt_selectivity_factor should be placed after the entire section for tidb_enable_pseudo_for_outdated_stats is complete. The properties for tidb_enable_pseudo_for_outdated_stats should be grouped together, followed by its description.

Also, this change seems unrelated to the main purpose of this pull request. It might be better to move this change to a separate PR.

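As a rough illustration of the fallback behavior described for `tidb_opt_selectivity_factor` — the `estimate_rows` helper below is hypothetical and not TiDB's actual cost model:

```python
def estimate_rows(table_rows, selectivity=None, default_factor=0.8):
    """Row-count estimate for a filtered scan.

    When statistics cannot yield a per-predicate selectivity, fall back to a
    fixed default factor, analogous to tidb_opt_selectivity_factor (default 0.8).
    """
    factor = selectivity if selectivity is not None else default_factor
    return table_rows * factor

# With usable statistics: a predicate like `a = 1` matches ~1% of rows.
print(estimate_rows(10_000, selectivity=0.01))  # -> 100.0

# No usable statistics: the optimizer falls back to the default factor.
print(estimate_rows(10_000))                    # -> 8000.0
```

Because the fallback estimate (8000 rows here) can differ from reality by orders of magnitude, changing the factor shifts plan choices globally, which is presumably why the documentation advises against modifying it.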
2 changes: 1 addition & 1 deletion telemetry.md
@@ -14,7 +14,7 @@ When the telemetry is enabled, TiDB, TiUP and TiDB Dashboard collect usage infor

## What is shared?

-The following sections describe the shared usage information in detail for each component. The usage details that get shared might change over time. These changes (if any) will be announced in [release notes](/releases/release-notes.md).
+The following sections describe the shared usage information in detail for each component. The usage details that get shared might change over time. These changes (if any) will be announced in [release notes](https://docs.pingcap.com/releases/tidb-self-managed/).

> **Note:**
>
6 changes: 3 additions & 3 deletions upgrade-tidb-using-tiup.md
@@ -39,7 +39,7 @@ This document is targeted for the following upgrade paths:
4. Upgrade the cluster to v6.5.12 according to this document.
- Support upgrading the versions of TiDB Binlog, TiCDC, TiFlash, and other components.
- When upgrading TiFlash from versions earlier than v6.3.0 to v6.3.0 and later versions, note that the CPU must support the AVX2 instruction set under the Linux AMD64 architecture and the ARMv8 instruction set architecture under the Linux ARM64 architecture. For details, see the description in [v6.3.0 Release Notes](/releases/release-6.3.0.md#others).
-- For detailed compatibility changes of different versions, see the [Release Notes](/releases/release-notes.md) of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes.
+- For detailed compatibility changes of different versions, see the [Release Notes](https://docs.pingcap.com/releases/tidb-self-managed/) of each version. Modify your cluster configuration according to the "Compatibility Changes" section of the corresponding release notes.
- For clusters that upgrade from versions earlier than v5.3 to v5.3 or later versions, the default deployed Prometheus will upgrade from v2.8.1 to v2.27.1. Prometheus v2.27.1 provides more features and fixes a security issue. Compared with v2.8.1, alert time representation in v2.27.1 is changed. For more details, see this [Prometheus commit](https://github.com/prometheus/prometheus/commit/7646cbca328278585be15fa615e22f2a50b47d06).

## Preparations
@@ -50,7 +50,7 @@ This section introduces the preparation works needed before upgrading your TiDB

Review compatibility changes in TiDB release notes. If any changes affect your upgrade, take actions accordingly.

-The following provides compatibility changes you need to know when you upgrade from v6.4.0 to the current version (v6.5.12). If you are upgrading from v6.3.0 or earlier versions to the current version, you might also need to check the compatibility changes introduced in intermediate versions in the corresponding [release notes](/releases/release-notes.md).
+The following provides compatibility changes you need to know when you upgrade from v6.4.0 to the current version (v6.5.12). If you are upgrading from v6.3.0 or earlier versions to the current version, you might also need to check the compatibility changes introduced in intermediate versions in the corresponding [release notes](https://docs.pingcap.com/releases/tidb-self-managed/).

- TiDB v6.5.0 [compatibility changes](/releases/release-6.5.0.md#compatibility-changes) and [deprecated features](/releases/release-6.5.0.md#deprecated-feature)
- TiDB v6.5.1 [compatibility changes](/releases/release-6.5.1.md#compatibility-changes)
@@ -292,7 +292,7 @@ Re-execute the `tiup cluster upgrade` command to resume the upgrade. The upgrade

### How to fix the issue that the upgrade gets stuck when upgrading to v6.2.0 or later versions?

-Starting from v6.2.0, TiDB enables the [concurrent DDL framework](/ddl-introduction.md#how-the-online-ddl-asynchronous-change-works-in-tidb) by default to execute concurrent DDLs. This framework changes the DDL job storage from a KV queue to a table queue. This change might cause the upgrade to get stuck in some scenarios. The following are some scenarios that might trigger this issue and the corresponding solutions:
+Starting from v6.2.0, TiDB enables the [concurrent DDL framework](/best-practices/ddl-introduction.md#how-the-online-ddl-asynchronous-change-works-in-tidb) by default to execute concurrent DDLs. This framework changes the DDL job storage from a KV queue to a table queue. This change might cause the upgrade to get stuck in some scenarios. The following are some scenarios that might trigger this issue and the corresponding solutions:

- Upgrade gets stuck due to plugin loading
