Merged
28 commits
85ea926
Revert "InfluxDB 1.12.3 release (#6872)"
jstirnaman Mar 14, 2026
69f06d3
fix(v1): split v1.12.3 release — publish OSS, defer Enterprise pendin…
jstirnaman Mar 14, 2026
5053dd6
chore: document separate OSS/Enterprise PR workflow for v1 releases
jstirnaman Mar 14, 2026
c497f13
fix(enterprise): restore FUTURE/PAST LIMIT grammar productions in Inf…
jstirnaman Mar 14, 2026
3f6b969
Revert "fix(enterprise): restore FUTURE/PAST LIMIT grammar production…
jstirnaman Mar 14, 2026
63fcdb7
fix(v1): remove incorrect FUTURE/PAST LIMIT ordering caution from OSS…
jstirnaman Mar 14, 2026
ec0d76d
fix(enterprise): correct FUTURE/PAST LIMIT documentation for v1.12.2 …
jstirnaman Mar 14, 2026
c3c6e4a
style(enterprise): clean up InfluxQL spec formatting and fix broken l…
jstirnaman Mar 14, 2026
b3b63a0
feat(enterprise): InfluxDB Enterprise v1.12.3 release documentation
jstirnaman Mar 14, 2026
48acce4
fix(enterprise): correct FUTURE/PAST LIMIT documentation for v1.12.2 …
jstirnaman Mar 14, 2026
f05f9e8
style(enterprise): clean up InfluxQL spec and fix broken config links…
jstirnaman Mar 14, 2026
20c206e
Merge branch 'worktree-jts-fix-unpublish-ent-1.12.3' into jts-enterpr…
jstirnaman Mar 31, 2026
69633f9
Merge branch 'master' into jts-enterprise-v1.12.3-release
jstirnaman Mar 31, 2026
f556289
feat(enterprise): document influxd-ctl backup improvements (#7021)
jstirnaman Mar 31, 2026
554ae78
feat(v1): document time_format query parameter (#7011)
jstirnaman Mar 31, 2026
7343566
feat(v1): document /debug/vars config and CQ statistics (#7013)
jstirnaman Mar 31, 2026
749f01c
feat(enterprise): document -e flag on influxd-ctl show-shards (#7014)
jstirnaman Mar 31, 2026
dafb85e
feat(enterprise): document -timeout global flag for influxd-ctl (#7015)
jstirnaman Mar 31, 2026
5f6b4c5
feat(v1): document user column in SHOW QUERIES output (#7017)
jstirnaman Mar 31, 2026
e154c97
feat(enterprise): document SIGHUP log level reload (#7018)
jstirnaman Mar 31, 2026
5d4b504
feat(v1): document user-query-bytes-enabled config option (#7019)
jstirnaman Mar 31, 2026
e72e602
feat(enterprise): document rpc-resettable-*-timeout config options (#…
jstirnaman Mar 31, 2026
968a53e
feat(enterprise): add v1.12.3 release notes and bump product version
jstirnaman Mar 31, 2026
2047666
chore(enterprise): update enterprise 1.12.3 version and binary naming…
sanderson Mar 31, 2026
2b8b758
chore(enterprise): add ent 1.12.3 release summary, update release not…
sanderson Mar 31, 2026
b5c9e9e
Clarify speed improvements in release notes
sanderson Mar 31, 2026
f623c1c
fix(enterprise): fix broken fragment links across v1 docs
jstirnaman Apr 1, 2026
8985701
fix(enterprise): fix remaining broken links from CI run #23826485679
jstirnaman Apr 1, 2026
581 changes: 574 additions & 7 deletions PLAN.md

Large diffs are not rendered by default.

147 changes: 125 additions & 22 deletions content/enterprise_influxdb/v1/about-the-project/release-notes.md

Large diffs are not rendered by default.

Original file line number Diff line number Diff line change
@@ -59,14 +59,14 @@
- [Exporting and importing data](#exporting-and-importing-data)
- [Exporting data](#exporting-data)
- [Importing data](#importing-data)
- [Example](#example)
- [Example](#example-export-and-import-for-disaster-recovery)

### Backup utility

A backup creates a copy of the [metastore](/enterprise_influxdb/v1/concepts/glossary/#metastore) and [shard](/enterprise_influxdb/v1/concepts/glossary/#shard) data at that point in time and stores the copy in the specified directory.

To back up **only the cluster metastore**, use the `-strategy only-meta` backup option.
For more information, see how to [perform a metastore only backup](#perform-a-metastore-only-backup).
For more information, see how to [perform a metadata only backup](#perform-a-metadata-only-backup).

All backups include a manifest, a JSON file describing what was collected during the backup.
The filenames reflect the UTC timestamp of when the backup was created, for example:
@@ -263,7 +263,7 @@
##### Restore a backup

Restore a backup to an existing cluster or a new cluster.
By default, a restore writes to databases using the backed-up data's [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor).
By default, a restore writes to databases using the backed-up data's [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf).
An alternate replication factor can be specified with the `-newrf` flag when restoring a single database.
Restore supports both `-full` backups and incremental backups; the syntax for
a restore differs depending on the backup type.
@@ -501,7 +501,7 @@

InfluxDB Enterprise introduced incremental backups in version 1.2.0.
To restore a backup created prior to version 1.2.0, be sure to follow the syntax
for [restoring from a full backup](#restore-from-a-full-backup).
for [restoring from a `-full` backup](#restore-from-a--full-backup).

## Exporting and importing data

@@ -575,8 +575,8 @@

For an example of using the exporting and importing data approach for disaster recovery, see the presentation from InfluxDays 2019 on ["Architecting for Disaster Recovery"](https://www.youtube.com/watch?v=LyQDhSdnm4A). In this presentation, Capital One discusses the following:

- Exporting data every 15 minutes from an active InfluxDB Enterprise cluster to an AWS S3 bucket.

- Replicating the export file in the S3 bucket using the AWS S3 copy command.

- Importing data every 15 minutes from the AWS S3 bucket to an InfluxDB Enterprise cluster available for disaster recovery.

- Advantages of the export-import approach over the standard backup and restore utilities for large volumes of data.
- Managing users and scheduled exports and imports with a custom administration tool.
@@ -159,7 +159,7 @@
The Anti-Entropy service does its best to avoid hot shards (shards that are currently receiving writes)
because they change quickly.
While write replication between shard owner nodes (with a
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf)
greater than 1) typically happens in milliseconds, this slight difference is
still enough to cause the appearance of entropy where there is none.

@@ -346,4 +346,4 @@

#### `Skipped shards`

Indicates that the Anti-Entropy process has skipped a status check on shards because they are currently [hot](#hot-shards).

@@ -28,7 +28,7 @@
- [Graphite `[graphite]`](#graphite-settings)
- [Collectd `[collectd]`](#collectd-settings)
- [OpenTSDB `[opentsdb]`](#opentsdb-settings)
- [UDP `[udp]`](#udp-settings)

- [Continuous queries `[continuous-queries]`](#continuous-queries-settings)
- [TLS `[tls]`](#tls-settings)
- [Flux Query controls `[flux-controller]`](#flux-controller)
@@ -52,7 +52,7 @@

#### reporting-disabled

Default is `false`.


Once every 24 hours, InfluxDB Enterprise reports usage data to usage.influxdata.com.
The data includes a random ID, OS, architecture, version, the number of series, and other usage data. No data from user databases is ever transmitted.
@@ -598,6 +598,26 @@

Environment variable: `INFLUXDB_CLUSTER_SHARD_READER_TIMEOUT`

#### rpc-resettable-read-timeout {metadata="v1.12.3+"}

Default is `"15m"`.

Read inactivity timeout for incoming RPC connections between data nodes.
The timeout resets on each successful read operation, so it detects stalled connections rather than slow queries.
Set to `"0"` to disable.

Environment variable: `INFLUXDB_CLUSTER_RPC_RESETTABLE_READ_TIMEOUT`

#### rpc-resettable-write-timeout {metadata="v1.12.3+"}

Default is `"15m"`.

Write inactivity timeout for incoming RPC connections between data nodes.
The timeout resets on each successful write operation, so it detects stalled connections rather than slow writes.
Set to `"0"` to disable.

Environment variable: `INFLUXDB_CLUSTER_RPC_RESETTABLE_WRITE_TIMEOUT`
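Taken together, the two options above might be set in the `[cluster]` section of the data node configuration file like this (a sketch; the values shown are the documented defaults, included only for illustration):

```toml
[cluster]
  # Resets on each successful read; detects stalled connections, not slow queries.
  rpc-resettable-read-timeout = "15m"
  # Resets on each successful write; set to "0" to disable.
  rpc-resettable-write-timeout = "15m"
```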

#### https-enabled

Default is `false`.
@@ -633,6 +653,14 @@

Environment variable: `INFLUXDB_CLUSTER_HTTPS_INSECURE_TLS`

#### https-insecure-certificate {metadata="v1.12.3+"}

Default is `false`.

Skips file permission checking for `https-certificate` and `https-private-key` when `true`.

Environment variable: `INFLUXDB_CLUSTER_HTTPS_INSECURE_CERTIFICATE`
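For example, a configuration that enables HTTPS but skips the permission check on the certificate files might look like the following sketch (the file paths are hypothetical):

```toml
https-enabled = true
https-certificate = "/etc/ssl/influxdb.pem"
https-private-key = "/etc/ssl/influxdb-key.pem"
# Skip file permission checking on the certificate and key above.
https-insecure-certificate = true
```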

#### cluster-tracing

Default is `false`.
@@ -1145,6 +1173,17 @@

Environment variable: `INFLUXDB_HTTP_PPROF_AUTH_ENABLED`

#### user-query-bytes-enabled {metadata="v1.12.3+"}

Default is `false`.

Enables per-user query response byte tracking.
When enabled, InfluxDB records the number of bytes returned by queries for each user in the `userquerybytes` measurement, available through `SHOW STATS FOR 'userquerybytes'`, the `_internal` database, and the `/debug/vars` endpoint.

Unauthenticated queries are attributed to `(anonymous)`.

Environment variable: `INFLUXDB_HTTP_USER_QUERY_BYTES_ENABLED`
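With the option enabled, the per-user statistics named above can be queried directly; for example (the output layout is not shown here):

```sql
-- Requires user-query-bytes-enabled = true in the [http] section
SHOW STATS FOR 'userquerybytes'
```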

#### https-enabled

Default is `false`.
@@ -1171,6 +1210,14 @@

Environment variable: `INFLUXDB_HTTP_HTTPS_PRIVATE_KEY`

#### https-insecure-certificate {metadata="v1.12.3+"}

Default is `false`.

Skips file permission checking for `https-certificate` and `https-private-key` when `true`.

Environment variable: `INFLUXDB_HTTP_HTTPS_INSECURE_CERTIFICATE`

#### shared-secret

Default is `""`.
@@ -1274,6 +1321,15 @@

Determines which level of logs will be emitted.

To change the log level without restarting the data node, edit the `level` value in the configuration file and send `SIGHUP` to the process:

```bash
kill -SIGHUP <influxd_pid>
```

On receipt of `SIGHUP`, the data node reloads the configuration and applies the new log level.
`SIGHUP` also reloads TLS certificates, entitlements, and the anti-entropy service configuration. _v1.12.3+_

Environment variable: `INFLUXDB_LOGGING_LEVEL`

#### suppress-logo
@@ -1438,7 +1494,7 @@
## CollectD settings

The `[[collectd]]` settings control the listener for `collectd` data.
For more information, see [CollectD protocol support in InfluxDB](/enterprise_influxdb/v1/supported_protocols/collectd/).


### [[collectd]]

@@ -1647,7 +1703,7 @@
### Recommended server configuration for "modern compatibility"

InfluxData recommends configuring your InfluxDB server's TLS settings for "modern compatibility" that provides a higher level of security and assumes that backward compatibility is not required.
Our recommended TLS configuration settings for `ciphers`, `min-version`, and `max-version` are based on Mozilla's "modern compatibility" TLS server configuration described in [Security/Server Side TLS](https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility).
Our recommended TLS configuration settings for `ciphers`, `min-version`, and `max-version` are based on Mozilla's "modern compatibility" TLS server configuration described in [Security/Server Side TLS](https://wiki.mozilla.org/Security/Server_Side_TLS).

InfluxData's recommended TLS settings for "modern compatibility" are specified in the following configuration settings example.

@@ -1692,6 +1748,14 @@

Environment variable: `INFLUXDB_TLS_MAX_VERSION`

#### advanced-expiration {metadata="v1.12.3+"}

Sets how far in advance to log warnings about TLS certificate expiration.

Default is `"5d"`.

Environment variable: `INFLUXDB_TLS_ADVANCED_EXPIRATION`
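As a sketch, warning ten days ahead of certificate expiry instead of the default five would look like this in the `[tls]` section (the `"10d"` value is illustrative, not a recommendation):

```toml
[tls]
  # Warn in the logs 10 days before the TLS certificate expires.
  advanced-expiration = "10d"
```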

## Flux query management settings

### [flux-controller]
@@ -170,6 +170,14 @@ Use either:

Environment variable: `INFLUXDB_META_HTTPS_PRIVATE_KEY`

#### https-insecure-certificate {metadata="v1.12.3+"}

Default is `false`.

Skips file permission checking for `https-certificate` and `https-private-key` when `true`.

Environment variable: `INFLUXDB_META_HTTPS_INSECURE_CERTIFICATE`

#### https-insecure-tls

Default is `false`.
@@ -341,7 +349,7 @@ The shared secret used by the internal API for JWT authentication for
inter-node communication within the cluster.
Set this to a long pass phrase.
This value must be the same value as the
[`[meta] meta-internal-shared-secret`](/enterprise_influxdb/v1/administration/config-data-nodes#meta-internal-shared-secret) in the data node configuration file.
[`[meta] meta-internal-shared-secret`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#meta-internal-shared-secret) in the data node configuration file.
To use this option, set [`auth-enabled`](#auth-enabled) to `true`.

Environment variable: `INFLUXDB_META_INTERNAL_SHARED_SECRET`
@@ -452,7 +460,7 @@ Environment variable: `INFLUXDB_META_ENSURE_FIPS`
Default is `false`.

Require Raft clients to authenticate with server using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
[`meta-internal-shared-secret`](#internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+ and
are configured with the correct `meta-internal-shared-secret`.

@@ -465,7 +473,7 @@ Environment variable: `INFLUXDB_META_RAFT_PORTAL_AUTH_REQUIRED`
Default is `false`.

Require Raft servers to authenticate Raft clients using the
[`meta-internal-shared-secret`](#meta-internal-shared-secret).
[`meta-internal-shared-secret`](#internal-shared-secret).
This requires that all meta nodes are running InfluxDB Enterprise v1.12.0+, have
`raft-portal-auth-required=true`, and are configured with the correct
`meta-internal-shared-secret`. For existing clusters, it is recommended to enable `raft-portal-auth-required` and restart
Expand All @@ -477,7 +485,7 @@ Environment variable: `INFLUXDB_META_RAFT_DIALER_AUTH_REQUIRED`

### TLS settings

For more information, see [TLS settings for data nodes](/enterprise_influxdb/v1/administration/config-data-nodes#tls-settings).
For more information, see [TLS settings for data nodes](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#tls-settings).

#### Recommended "modern compatibility" cipher settings

@@ -1,5 +1,5 @@
---
title: Rebalance InfluxDB Enterprise v1 clusters

description: Manually rebalance an InfluxDB Enterprise v1 cluster.

aliases:
- /enterprise_influxdb/v1/guides/rebalance/
@@ -21,7 +21,7 @@
cluster
* Ensure that every
shard is on *n* number of nodes, where *n* is determined by the retention policy's
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf)

Rebalancing a cluster is essential for cluster health.
Perform a rebalance if you add a new data node to your cluster.
@@ -59,7 +59,7 @@

For demonstration purposes, the next steps assume that you added a third
data node to a previously two-data-node cluster that has a
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor) of
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf) of
two.
This rebalance procedure is applicable for different cluster sizes and
replication factors, but some of the specific, user-provided values will depend
@@ -258,7 +258,7 @@
22 telegraf autogen 2 [...] 2017-01-26T18:05:36.418734949Z* [{5 enterprise-data-02:8088} {6 enterprise-data-03:8088}]
```

That's it.

You've successfully rebalanced your cluster; you expanded the available disk
size on the original data nodes and increased the cluster's write throughput.

@@ -266,7 +266,7 @@

For demonstration purposes, the next steps assume that you added a third
data node to a previously two-data-node cluster that has a
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor) of
[replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf) of
two.
This rebalance procedure is applicable for different cluster sizes and
replication factors, but some of the specific, user-provided values will depend
@@ -439,6 +439,6 @@
000000001-000000001.tsm # 👍
```

That's it.

You've successfully rebalanced your cluster and increased data availability for
queries and query throughput.
@@ -32,7 +32,7 @@
information that the [anti-entropy](/enterprise_influxdb/v1/administration/configure/anti-entropy/)
(AE) process depends on.

**Data nodes** hold raw time-series data and metadata. Data shards are both distributed and replicated across data nodes in the cluster. The AE process runs on data nodes and references the shard information stored in the meta nodes to ensure each data node has the shards they need.


`influxd-ctl` is a CLI included in each meta node and is used to manage your InfluxDB Enterprise cluster.

@@ -94,7 +94,7 @@
### Replace responsive and unresponsive data nodes in a cluster

The process of replacing both responsive and unresponsive data nodes is the same.
Follow the instructions for [replacing data nodes](#replace-a-data-node-in-an-influxdb-enterprise-cluster).
Follow the instructions for [replacing data nodes](#replace-data-nodes-in-an-influxdb-enterprise-cluster).

### Reconnect a data node with a failed disk

@@ -269,14 +269,14 @@
#### 2.5. Remove and replace all other non-leader meta nodes

**If replacing only one meta node, no further action is required.**
If replacing others, repeat steps [2.1-2.4](#2-1-provision-a-new-meta-node) for all non-leader meta nodes one at a time.
If replacing others, repeat steps [2.1-2.4](#21-provision-a-new-meta-node) for all non-leader meta nodes one at a time.

### 3. Replace the leader node

As non-leader meta nodes are removed and replaced, the leader node oversees the replication of data to each of the new meta nodes.
Leave the leader up and running until at least two of the new meta nodes are up, running and healthy.

#### 3.1 - Kill the meta process on the leader node
#### 3.1. Kill the meta process on the leader node

Log into the leader meta node and kill the meta process.

@@ -296,9 +296,9 @@
curl localhost:8091/status | jq
```

#### 3.2 - Remove and replace the old leader node
#### 3.2. Remove and replace the old leader node

Remove the old leader node and replace it by following steps [2.1-2.4](#2-1-provision-a-new-meta-node).
Remove the old leader node and replace it by following steps [2.1-2.4](#21-provision-a-new-meta-node).
The minimum number of meta nodes you should have in your cluster is 3.

## Replace data nodes in an InfluxDB Enterprise cluster
@@ -369,7 +369,7 @@
6 foo autogen 2 4 2018-03-19T00:00:00Z 2018-03-26T00:00:00Z [{5 enterprise-data-02:8088} {4 enterprise-data-03:8088}]
```

Within the duration defined by [`anti-entropy.check-interval`](/enterprise_influxdb/v1/administration/config-data-nodes#check-interval-10m),
Within the duration defined by [`anti-entropy.check-interval`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#check-interval),
the AE service begins copying shards from other shard owners to the new node.
The time it takes for copying to complete is determined by the number of shards
copied and how much data is stored in each.
20 changes: 10 additions & 10 deletions content/enterprise_influxdb/v1/administration/upgrading.md
@@ -41,27 +41,27 @@ Complete the following steps to upgrade meta nodes:
##### Ubuntu and Debian (64-bit)

```bash
wget https://dl.influxdata.com/enterprise/releases/influxdb-meta_{{< latest-patch >}}-c{{< latest-patch >}}-1_amd64.deb
wget https://dl.influxdata.com/enterprise/releases/influxdb-meta_{{< latest-patch >}}-c{{< latest-patch >}}_amd64.deb
```

##### RedHat and CentOS (64-bit)

```bash
wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-{{< latest-patch >}}_c{{< latest-patch >}}-1.x86_64.rpm
wget https://dl.influxdata.com/enterprise/releases/influxdb-meta-{{< latest-patch >}}_c{{< latest-patch >}}.x86_64.rpm
```

### Install the meta node package

##### Ubuntu and Debian (64-bit)

```bash
sudo dpkg -i influxdb-meta_{{< latest-patch >}}-c{{< latest-patch >}}-1_amd64.deb
sudo dpkg -i influxdb-meta_{{< latest-patch >}}-c{{< latest-patch >}}_amd64.deb
```

##### RedHat and CentOS (64-bit)

```bash
sudo yum localinstall influxdb-meta-{{< latest-patch >}}-c{{< latest-patch >}}-1.x86_64.rpm
sudo yum localinstall influxdb-meta-{{< latest-patch >}}-c{{< latest-patch >}}.x86_64.rpm
```

### Update the meta node configuration file
@@ -167,13 +167,13 @@ from other data nodes in the cluster.
##### Ubuntu and Debian (64-bit)

```bash
wget https://dl.influxdata.com/enterprise/releases/influxdb-data_{{< latest-patch >}}-c{{< latest-patch >}}-1_amd64.deb
wget https://dl.influxdata.com/enterprise/releases/influxdb-data_{{< latest-patch >}}-c{{< latest-patch >}}_amd64.deb
```

##### RedHat and CentOS (64-bit)

```bash
wget https://dl.influxdata.com/enterprise/releases/influxdb-data-{{< latest-patch >}}_c{{< latest-patch >}}-1.x86_64.rpm
wget https://dl.influxdata.com/enterprise/releases/influxdb-data-{{< latest-patch >}}_c{{< latest-patch >}}.x86_64.rpm
```

### Install the data node package
@@ -188,7 +188,7 @@ next procedure, [Update the data node configuration file](#update-the-data-node-
##### Ubuntu & Debian (64-bit)

```bash
sudo dpkg -i influxdb-data_{{< latest-patch >}}-c{{< latest-patch >}}-1_amd64.deb
sudo dpkg -i influxdb-data_{{< latest-patch >}}-c{{< latest-patch >}}_amd64.deb
```

##### RedHat & CentOS (64-bit)
@@ -207,9 +207,9 @@ Migrate any custom settings from your previous data node configuration file.

| Section | Setting |
| --------| ----------------------------------------------------------|
| `[data]` | <ul><li>To use Time Series Index (TSI) disk-based indexing, add [`index-version = "tsi1"`](/enterprise_influxdb/v1/administration/config-data-nodes#index-version-inmem) <li>To use TSM in-memory index, add [`index-version = "inmem"`](/enterprise_influxdb/v1/administration/config-data-nodes#index-version-inmem) <li>Add [`wal-fsync-delay = "0s"`](/enterprise_influxdb/v1/administration/config-data-nodes#wal-fsync-delay-0s) <li>Add [`max-concurrent-compactions = 0`](/enterprise_influxdb/v1/administration/config-data-nodes#max-concurrent-compactions-0)<li>Set[`cache-max-memory-size`](/enterprise_influxdb/v1/administration/config-data-nodes#cache-max-memory-size-1g) to `1073741824` |
| `[cluster]`| <ul><li>Add [`pool-max-idle-streams = 100`](/enterprise_influxdb/v1/administration/config-data-nodes#pool-max-idle-streams-100) <li>Add[`pool-max-idle-time = "1m0s"`](/enterprise_influxdb/v1/administration/config-data-nodes#pool-max-idle-time-60s) <li>Remove `max-remote-write-connections`
|[`[anti-entropy]`](/enterprise_influxdb/v1/administration/config-data-nodes#anti-entropy)| <ul><li>Add `enabled = true` <li>Add `check-interval = "30s"` <li>Add `max-fetch = 10`|
| `[data]` | <ul><li>To use Time Series Index (TSI) disk-based indexing, add [`index-version = "tsi1"`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#index-version) <li>To use TSM in-memory index, add [`index-version = "inmem"`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#index-version) <li>Add [`wal-fsync-delay = "0s"`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#wal-fsync-delay) <li>Add [`max-concurrent-compactions = 0`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#max-concurrent-compactions)<li>Set[`cache-max-memory-size`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#cache-max-memory-size) to `1073741824` |
| `[cluster]`| <ul><li>Add [`pool-max-idle-streams = 100`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#pool-max-idle-streams) <li>Add[`pool-max-idle-time = "1m0s"`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#pool-max-idle-time) <li>Remove `max-remote-write-connections`
|[`[anti-entropy]`](/enterprise_influxdb/v1/administration/configure/config-data-nodes/#anti-entropy)| <ul><li>Add `enabled = true` <li>Add `check-interval = "30s"` <li>Add `max-fetch = 10`|
|`[admin]`| Remove entire section.|

For more information about TSI, see [TSI overview](/enterprise_influxdb/v1/concepts/time-series-index/) and [TSI details](/enterprise_influxdb/v1/concepts/tsi-details/).
2 changes: 1 addition & 1 deletion content/enterprise_influxdb/v1/features/_index.md
@@ -29,7 +29,7 @@ Certain configurations (e.g., 3 meta and 2 data node) provide high-availability
while making certain tradeoffs in query performance when compared to a single node.

Further increasing the number of nodes can improve performance in both respects.
For example, a cluster with 4 data nodes and a [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor)
For example, a cluster with 4 data nodes and a [replication factor](/enterprise_influxdb/v1/concepts/glossary/#replication-factor-rf)
of 2 can support a higher volume of write traffic than a single node could.
It can also support a higher *query* workload, as the data is replicated
in two locations. Performance of the queries may be on par with a single