Changes from all commits (93 commits)
f72afc8
HBASE-29573: Fully load QuotaCache instead of reading individual rows…
charlesconnell Sep 12, 2025
ffed09d
HBASE-26974 Introduce a LogRollProcedure (#5408)
frostruan Sep 12, 2025
89416ce
HBASE-27355 Separate meta read requests from master and client (#7261)
frostruan Sep 15, 2025
0d1ff8a
HBASE-27157 Potential race condition in WorkerAssigner (#4577)
Jul 6, 2022
d592404
HBASE-29451 Add Docs section describing BucketCache Time based priori…
wchevreuil Sep 15, 2025
1e06bcc
HBASE-29577 Fix NPE from RegionServerRpcQuotaManager when reloading c…
junegunn Sep 15, 2025
3ce997c
HBASE-29590 Use hadoop 3.4.2 as default hadooop3 dependency (#7301)
stoty Sep 16, 2025
c6a0c3b
Modern backup failures can cause backup system to lock up (#7288)
hgromer Sep 16, 2025
7f7b9e6
Revert "Modern backup failures can cause backup system to lock up (#7…
rmdmattingly Sep 16, 2025
0f11bec
HBASE-29448 Modern backup failures can cause backup system to lock up…
rmdmattingly Sep 16, 2025
280e8e8
HBASE-29548 Update ApacheDS to 2.0.0.AM27 and ldap-api to 2.1.7 (#7305)
stoty Sep 18, 2025
e1c17e5
HBASE-29602 Add -Djava.security.manager=allow to JDK18+ surefire JVM …
stoty Sep 18, 2025
620f7a3
HBASE-29601 Handle Junit 5 tests in TestCheckTestClasses (#7311)
stoty Sep 18, 2025
40b1ffc
HBASE-29592 Add hadoop 3.4.2 in client integration tests (#7306)
stoty Sep 18, 2025
8799c13
HBASE-29587 Set Test category for TestSnapshotProcedureEarlyExpiratio…
srinireddy2020 Sep 18, 2025
8adb7bd
HBASE-29610 Add and use String constants for Junit 5 @Tag annotations…
stoty Sep 18, 2025
da7325b
HBASE-29591 Add hadoop 3.4.2 in hadoop check (#7320)
Apache9 Sep 18, 2025
04d48ee
HBASE-29609 Upgrade checkstyle and Maven checkstyle plugin (#7321)
PDavid Sep 18, 2025
42fc87d
HBASE-29608 Add test to make sure we do not have copy paste errors in…
Apache9 Sep 20, 2025
d6e68b1
HBASE-29608 Addendum remove jdk9+ only API calls
Apache9 Sep 20, 2025
fd7a84f
Revert "HBASE-29609 Upgrade checkstyle and Maven checkstyle plugin (#…
PDavid Sep 20, 2025
99b7e6c
HBASE-29612 Remove HBaseTestingUtil.forceChangeTaskLogDir (#7326)
stoty Sep 20, 2025
1cd9f29
HBASE-29576 Replicate HBaseClassTestRule functionality for Junit 5 (#…
Apache9 Sep 22, 2025
57e3d5e
HBASE-29576 Addendum fix typo Jupitor -> Jupiter
Apache9 Sep 22, 2025
0a06e2b
HBASE-29619 Don't use Java 14+ style case statements in RestoreBackup…
stoty Sep 22, 2025
d108b8e
HBASE-29550 Reflection error in TestRSGroupsKillRS with Java 21 (#7327)
stoty Sep 22, 2025
b5cdaab
HBASE-29615 Update Small tests description wrt reuseForks in docs (#7…
stoty Sep 22, 2025
608c1b9
HBASE-28440 Add support for using mapreduce sort in HFileOutputFormat…
hgromer Sep 24, 2025
0960087
HBASE-29623 Blocks for CFs with BlockCache disabled may still get cac…
wchevreuil Sep 25, 2025
67420e3
HBASE-29627 Handle any block cache fetching errors when reading a blo…
wchevreuil Sep 25, 2025
e0cec31
HBASE-29614 Remove static final field modification in tests around Un…
Apache9 Sep 29, 2025
c4f7e66
HBASE-29504 [DOC] Document Namespace Auto-Creation During Restore (#7…
vinayakphegde Sep 29, 2025
2c3b89b
HBASE-29629 Record the quota user name value on metrics for RpcThrott…
sidkhillon Sep 30, 2025
c663fc4
HBASE-29497 Mention HFiles for incremental backups (#7216)
vinayakphegde Oct 1, 2025
a2a70d6
HBASE-29505 [DOC] Document Enhanced Options for Backup Delete Command…
vinayakphegde Oct 1, 2025
82e36a2
HBASE-29631 Fix race condition in IncrementalTableBackupClient when H…
sidkhillon Oct 2, 2025
2d88120
HBASE-29626: Refactor server side scan metrics for Coproc hooks (#7340)
sanjeet006py Oct 3, 2025
df34c65
HBASE-29152 Replace site skin with Reflow2 Maven skin (#7355)
PDavid Oct 7, 2025
d0b9478
HBASE-29636 Implement TimedOutTestsListener for junit 5 (#7352)
Apache9 Oct 7, 2025
be40011
HBASE-29223 Migrate Master Status Jamon page back to JSP (#6875)
PDavid Oct 8, 2025
a63c6b4
HBASE-29647 Restore preWALRestore and postWALRestore coprocessor hook…
stoty Oct 9, 2025
d8b1912
HBASE-29637 Implement ResourceCheckerJUnitListener for junit 5 (#7366)
Apache9 Oct 9, 2025
d1bce57
HBASE-29604 BackupHFileCleaner uses flawed time based check (#7360)
DieterDP-ng Oct 10, 2025
e575525
HBASE-29650 Upgrade tomcat-jasper to 9.0.110 (#7372)
stoty Oct 10, 2025
bab3df9
HBASE-29653 Upgrade os-maven-plugin to 1.7.1 for RISC-V riscv64 suppo…
gong-flying Oct 15, 2025
fafa03c
HBASE-29659 Replace reflow-default-webdeps to fix site build failure …
PDavid Oct 16, 2025
7892207
HBASE-29531 Migrate RegionServer Status Jamon page back to JSP (#7371)
PDavid Oct 16, 2025
dfca61b
HBASE-29663 TimeBasedLimiters should support dynamic configuration re…
rmdmattingly Oct 16, 2025
6d7829a
HBASE-29609 Upgrade checkstyle and Maven checkstyle plugin to support…
PDavid Oct 16, 2025
47f7e1d
HBASE-29680 release-util.sh should not hardcode JAVA_HOME for spotles…
apurtell Oct 22, 2025
a81a5fd
HBASE-29677: Thread safety in QuotaRefresherChore (#7401)
charlesconnell Oct 22, 2025
a79100b
HBASE-29351 Quotas: adaptive wait intervals (#7396)
rmdmattingly Oct 22, 2025
07c2b5b
HBASE-29679: Suppress stack trace in RpcThrottlingException (#7403)
charlesconnell Oct 28, 2025
a47fa6a
HBASE-29461 Alphabetize the list of variables that can be dynamically…
kgeisz Oct 28, 2025
1d5649c
HBASE-29690 Correct typo in TableReplicationQueueStorage.removeAllQue…
droudnitsky Oct 30, 2025
305951e
HBASE-29651 Bump jruby to 9.4.14.0 to fix multiple CVEs (#7405)
xavifeds8 Oct 30, 2025
f800a13
HBASE-27126 Support multi-threads cleaner for MOB files (#5833)
chandrasekhar-188k Nov 1, 2025
bc54a7e
HBASE-29662 - Avoid regionDir/tableDir creation as part of .regioninf…
gvprathyusha6 Nov 3, 2025
eae2198
HBASE-29686 Compatible issue of HFileOutputFormat2#configureRemoteClu…
mokai87 Nov 4, 2025
9c16588
HBASE-29667 Correct block priority to SINGLE on the first write to th…
Huginn-kio Nov 4, 2025
8ef271f
[ADDENDUM] HBASE-29223 Fix TestMasterStatusUtil (#7416)
PDavid Nov 4, 2025
e2e2676
HBASE-29700 Always close RPC servers in AbstractTestIPC (#7434)
stoty Nov 4, 2025
6e85f12
HBASE-29703 Remove duplicate calls to withNextBlockOnDiskSize (#7440)
liuxiaocs7 Nov 5, 2025
33c4bdc
HBASE-29702 Remove shade plugin from hbase-protocol-shaded (#7438)
stoty Nov 6, 2025
0d3014c
HBASE-28996: Implement Custom ReplicationEndpoint to Enable WAL Backu…
vinayakphegde Feb 18, 2025
0bff7eb
HBASE-29025: Enhance the full backup command to support Continuous Ba…
vinayakphegde Mar 4, 2025
912ef67
HBASE-29210: Introduce Validation for PITR-Critical Backup Deletion (…
vinayakphegde Apr 10, 2025
0e3b5e4
HBASE-29261: Investigate flaw in backup deletion validation of PITR-c…
vinayakphegde May 20, 2025
c4bef9e
HBASE-29133: Implement "pitr" Command for Point-in-Time Restore (#6717)
vinayakphegde May 30, 2025
716dab8
HBASE-29255: Integrate backup WAL cleanup logic with the delete comma…
vinayakphegde Jun 11, 2025
b54da1b
HBASE-28990 Modify Incremental Backup for Continuous Backup (#6788)
ankitsol Jun 20, 2025
393602d
HBASE-29350: Ensure Cleanup of Continuous Backup WALs After Last Back…
vinayakphegde Jun 23, 2025
1a4c610
HBASE-29219 Ignore Empty WAL Files While Consuming Backed-Up WAL File…
vinayakphegde Jun 24, 2025
1a2ff7b
HBASE-29406: Skip Copying Bulkloaded Files to Backup Location in Cont…
vinayakphegde Jun 27, 2025
27ea7b3
HBASE-29449 Update backup describe command for continuous backup (#7045)
ankitsol Jul 15, 2025
37e195a
HBASE-29445 Add Option to Specify Custom Backup Location in PITR (#7153)
vinayakphegde Jul 16, 2025
aa69616
HBASE-29441 ReplicationSourceShipper should delegate the empty wal en…
vinayakphegde Jul 16, 2025
a4cd71a
HBASE-29459 Capture bulkload files only till IncrCommittedWalTs durin…
ankitsol Jul 22, 2025
3c5c999
HBASE-29310 Handle Bulk Load Operations in Continuous Backup (#7150)
ankitsol Jul 23, 2025
fa6b83f
HBASE-28957 spotless apply after rebase
vinayakphegde Jul 29, 2025
3044b11
HBASE-29375 Add Unit Tests for BackupAdminImpl and Improve Test Granu…
vinayakphegde Jul 29, 2025
176e8c6
HBASE-29519 Copy Bulkloaded Files in Continuous Backup (#7222)
vinayakphegde Aug 20, 2025
5d815b8
HBASE-29524 Handle bulk-loaded HFiles in delete and cleanup process (…
vinayakphegde Aug 26, 2025
29c228a
[HBASE-29520] Utilize Backed-up Bulkloaded Files in Incremental Backu…
ankitsol Sep 8, 2025
9bd36d0
Revert "HBASE-29310 Handle Bulk Load Operations in Continuous Backup …
anmolnar Sep 11, 2025
26f51a0
HBASE-29521: Update Restore Command to Handle Bulkloaded Files (#7300)
vinayakphegde Sep 25, 2025
12e1292
HBASE-29656 Scan WALs to identify bulkload operations for incremental…
ankitsol Oct 27, 2025
6aa212f
HBASE-28957. Build + spotless fix
anmolnar Nov 6, 2025
480fe04
HBASE-29826: Backup merge is failing because .backup.manifest cannot …
kgeisz Jan 23, 2026
fc752f7
HBASE-29825: Incremental backup is failing due to incorrect timezone …
kgeisz Feb 4, 2026
77962a3
HBASE-29687: Extend IntegrationTestBackupRestore to handle continuous…
kgeisz Nov 5, 2025
01d74a1
HBASE-29815: Fix issue where backup integration tests are not running…
kgeisz Jan 15, 2026
15487a6
Merge branch 'HBASE-29164' into HBASE-29164_rebased
kgeisz Feb 4, 2026
4 changes: 2 additions & 2 deletions dev-support/Jenkinsfile
@@ -59,8 +59,8 @@ pipeline {
ASF_NIGHTLIES_BASE_ORI = "${ASF_NIGHTLIES}/hbase/${JOB_NAME}/${BUILD_NUMBER}"
ASF_NIGHTLIES_BASE = "${ASF_NIGHTLIES_BASE_ORI.replaceAll(' ', '%20')}"
// These are dependent on the branch
HADOOP3_VERSIONS = "3.3.5,3.3.6,3.4.0,3.4.1"
HADOOP3_DEFAULT_VERSION = "3.4.1"
HADOOP3_VERSIONS = "3.3.5,3.3.6,3.4.0,3.4.1,3.4.2"
HADOOP3_DEFAULT_VERSION = "3.4.2"
}
parameters {
booleanParam(name: 'USE_YETUS_PRERELEASE', defaultValue: false, description: '''Check to use the current HEAD of apache/yetus rather than our configured release.
2 changes: 1 addition & 1 deletion dev-support/create-release/release-util.sh
@@ -969,7 +969,7 @@ function get_hadoop3_version() {
# case spotless:check failure, so we should run spotless:apply before committing
function maven_spotless_apply() {
# our spotless plugin version requires at least java 11 to run, so we use java 17 here
JAVA_HOME="/usr/lib/jvm/java-17-openjdk-amd64" "${MVN[@]}" spotless:apply
JAVA_HOME="${JAVA17_HOME}" "${MVN[@]}" spotless:apply
}

function git_add_poms() {
8 changes: 4 additions & 4 deletions dev-support/hbase-personality.sh
@@ -612,17 +612,17 @@ function hadoopcheck_rebuild
# TODO remove this on non 2.5 branches ?
yetus_info "Setting Hadoop 3 versions to test based on branch-2.5 rules"
if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then
hbase_hadoop3_versions="3.2.4 3.3.6 3.4.0"
hbase_hadoop3_versions="3.2.4 3.3.6 3.4.1"
else
hbase_hadoop3_versions="3.2.3 3.2.4 3.3.2 3.3.3 3.3.4 3.3.5 3.3.6 3.4.0"
hbase_hadoop3_versions="3.2.3 3.2.4 3.3.2 3.3.3 3.3.4 3.3.5 3.3.6 3.4.0 3.4.1"
fi
else
yetus_info "Setting Hadoop 3 versions to test based on branch-2.6+/master/feature branch rules"
# Isn't runnung these tests with the default Hadoop version redundant ?
if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then
hbase_hadoop3_versions="3.3.6 3.4.0"
hbase_hadoop3_versions="3.3.6 3.4.1"
else
hbase_hadoop3_versions="3.3.5 3.3.6 3.4.0"
hbase_hadoop3_versions="3.3.5 3.3.6 3.4.0 3.4.1"
fi
fi

@@ -36,4 +36,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface ClientTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.ClientTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface CoprocessorTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.CoprocessorTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface FilterTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.FilterTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface FlakeyTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.FlakeyTests";
}
@@ -36,4 +36,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface IOTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.IOTests";
}
@@ -34,4 +34,5 @@
* @see LargeTests
*/
public interface IntegrationTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.IntegrationTests";
}
@@ -33,4 +33,5 @@
* @see IntegrationTests
*/
public interface LargeTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.LargeTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface MapReduceTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.MapReduceTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface MasterTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.MasterTests";
}
@@ -32,4 +32,5 @@
* @see IntegrationTests
*/
public interface MediumTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.MediumTests";
}
@@ -21,4 +21,5 @@
* Tag a test that covers our metrics handling.
*/
public interface MetricsTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.MetricsTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface MiscTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.MiscTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface RPCTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.RPCTests";
}
@@ -21,4 +21,5 @@
* Tag the tests related to rs group feature.
*/
public interface RSGroupTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.RSGroupTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface RegionServerTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.RegionServerTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface ReplicationTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.ReplicationTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface RestTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.RestTests";
}
@@ -35,4 +35,5 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface SecurityTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.SecurityTests";
}
@@ -30,4 +30,5 @@
* @see IntegrationTests
*/
public interface SmallTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.SmallTests";
}
@@ -36,4 +36,6 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface VerySlowMapReduceTests {
+ public static final String TAG =
+ "org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests";
}
@@ -36,4 +36,6 @@
* @see org.apache.hadoop.hbase.testclassification.VerySlowMapReduceTests
*/
public interface VerySlowRegionServerTests {
+ public static final String TAG =
+ "org.apache.hadoop.hbase.testclassification.VerySlowRegionServerTests";
}
@@ -22,4 +22,5 @@
* {@code RecoverableZooKeeper}, not for tests which depend on ZooKeeper.
*/
public interface ZKTests {
+ public static final String TAG = "org.apache.hadoop.hbase.testclassification.ZKTests";
}
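Note on the TAG constants added above: they give JUnit 5 tests a string equivalent of the JUnit 4 @Category classification, since @Tag only accepts compile-time String values. A minimal sketch of how a migrated test might use one of them; the test class itself is hypothetical, only SmallTests.TAG comes from this change set.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Hypothetical JUnit 5 test tagged with the new constant. Using the
// fully-qualified classification name keeps JUnit 4 @Category(SmallTests.class)
// and JUnit 5 @Tag(SmallTests.TAG) selections aligned on the same string.
@Tag(SmallTests.TAG)
public class ExampleSmallTest {

  @Test
  void addsSmallNumbers() {
    assertEquals(4, 2 + 2);
  }
}

Build-side filtering can then select on the tag value, for example through Surefire's groups/excludedGroups properties, which map onto JUnit 5 tag expressions.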
@@ -52,10 +52,13 @@ public class BackupHFileCleaner extends BaseHFileCleanerDelegate implements Abor
private boolean stopped = false;
private boolean aborted = false;
private Connection connection;
- // timestamp of most recent read from backup system table
- private long prevReadFromBackupTbl = 0;
- // timestamp of 2nd most recent read from backup system table
- private long secondPrevReadFromBackupTbl = 0;
+ // timestamp of most recent completed cleaning run
+ private volatile long previousCleaningCompletionTimestamp = 0;
+
+ @Override
+ public void postClean() {
+ previousCleaningCompletionTimestamp = EnvironmentEdgeManager.currentTime();
+ }

@Override
public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
@@ -79,12 +82,12 @@ public Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> files) {
return Collections.emptyList();
}

- secondPrevReadFromBackupTbl = prevReadFromBackupTbl;
- prevReadFromBackupTbl = EnvironmentEdgeManager.currentTime();
+ // Pin the threshold, we don't want the result to change depending on evaluation time.
+ final long recentFileThreshold = previousCleaningCompletionTimestamp;

return Iterables.filter(files, file -> {
// If the file is recent, be conservative and wait for one more scan of the bulk loads
- if (file.getModificationTime() > secondPrevReadFromBackupTbl) {
+ if (file.getModificationTime() > recentFileThreshold) {
LOG.debug("Preventing deletion due to timestamp: {}", file.getPath().toString());
return false;
}
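For readers following the BackupHFileCleaner change above (HBASE-29604): the cleaner now pins its "recent file" threshold to the completion timestamp of the previous cleaning run, recorded in postClean(), instead of tracking the last two reads of the backup system table. A stripped-down, standalone sketch of that pattern follows; it is an illustration only and does not use the real BaseHFileCleanerDelegate API.

import java.util.List;
import java.util.stream.Collectors;

// Illustration of the pinned-threshold pattern: nothing modified after the
// previously completed cleaning run is ever offered for deletion, so every
// file has to survive at least one full cleaner cycle first.
class PinnedThresholdCleanerSketch {
  private volatile long previousCleaningCompletionTimestamp = 0;

  List<Long> deletableModificationTimes(List<Long> candidateModificationTimes) {
    // Pin the threshold once so the decision cannot drift while we iterate.
    final long recentFileThreshold = previousCleaningCompletionTimestamp;
    return candidateModificationTimes.stream()
      .filter(modTime -> modTime <= recentFileThreshold)
      .collect(Collectors.toList());
  }

  void postClean() {
    // The cleaner chore is assumed to call this after a full pass finishes.
    previousCleaningCompletionTimestamp = System.currentTimeMillis();
  }
}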
@@ -1668,9 +1668,7 @@ public static void restoreFromSnapshot(Connection conn) throws IOException {
try (Admin admin = conn.getAdmin()) {
String snapshotName = BackupSystemTable.getSnapshotName(conf);
if (snapshotExists(admin, snapshotName)) {
- admin.disableTable(BackupSystemTable.getTableName(conf));
- admin.restoreSnapshot(snapshotName);
- admin.enableTable(BackupSystemTable.getTableName(conf));
+ admin.restoreBackupSystemTable(snapshotName);
LOG.debug("Done restoring backup system table");
} else {
// Snapshot does not exists, i.e completeBackup failed after
@@ -45,7 +45,6 @@
import org.apache.hadoop.hbase.backup.BackupRequest;
import org.apache.hadoop.hbase.backup.BackupRestoreFactory;
import org.apache.hadoop.hbase.backup.BackupType;
- import org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager;
import org.apache.hadoop.hbase.backup.util.BackupUtils;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
@@ -204,7 +203,7 @@ private void handleContinuousBackup(Admin admin) throws IOException {

private void handleNonContinuousBackup(Admin admin) throws IOException {
initializeBackupStartCode(backupManager);
- performLogRoll(admin);
+ performLogRoll();
performBackupSnapshots(admin);
backupManager.addIncrementalBackupTableSet(backupInfo.getTables());

@@ -228,18 +227,14 @@ private void initializeBackupStartCode(BackupManager backupManager) throws IOExc
}
}

- private void performLogRoll(Admin admin) throws IOException {
+ private void performLogRoll() throws IOException {
// We roll log here before we do the snapshot. It is possible there is duplicate data
// in the log that is already in the snapshot. But if we do it after the snapshot, we
// could have data loss.
// A better approach is to do the roll log on each RS in the same global procedure as
// the snapshot.
LOG.info("Execute roll log procedure for full backup ...");
- Map<String, String> props = new HashMap<>();
- props.put("backupRoot", backupInfo.getBackupRootDir());
- admin.execProcedure(LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_SIGNATURE,
- LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_NAME, props);
-
+ BackupUtils.logRoll(conn, backupInfo.getBackupRootDir(), conf);
newTimestamps = backupManager.readRegionServerLastLogRollResult();
}

@@ -19,7 +19,6 @@

import java.io.IOException;
import java.util.ArrayList;
- import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
@@ -29,9 +28,7 @@
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
- import org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager;
import org.apache.hadoop.hbase.backup.util.BackupUtils;
- import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore;
import org.apache.hadoop.hbase.util.CommonFSUtils;
@@ -84,13 +81,8 @@ public Map<String, Long> getIncrBackupLogFileMap() throws IOException {
}

LOG.info("Execute roll log procedure for incremental backup ...");
- HashMap<String, String> props = new HashMap<>();
- props.put("backupRoot", backupInfo.getBackupRootDir());
+ BackupUtils.logRoll(conn, backupInfo.getBackupRootDir(), conf);

- try (Admin admin = conn.getAdmin()) {
- admin.execProcedure(LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_SIGNATURE,
- LogRollMasterProcedureManager.ROLLLOG_PROCEDURE_NAME, props);
- }
newTimestamps = readRegionServerLastLogRollResult();

logList = getLogFilesForNewBackup(previousTimestampMins, newTimestamps, conf, savedStartCode);