
Commit 3e425da

Refactor removeDuplicates

1 parent 403c5c4

1 file changed: 14 additions, 26 deletions

@@ -1,36 +1,24 @@
 /**
  * Remove duplicate values from a sequence, preserving the order of the first occurrence of each value.
  *
- * Time Complexity:
- * Space Complexity:
- * Optimal Time Complexity:
- *
+ * Time Complexity: O(n²) before refactoring, O(n) after refactoring
+ * Space Complexity: O(n)
+ * Optimal Time Complexity: O(n)
+ *
+ * The original implementation relied on nested loops to detect duplicates,
+ * resulting in O(n²) time complexity, which does not scale well for large inputs.
+ * The refactored solution uses a Set to perform duplicate checks in constant time.
+ * This removes the need for an inner loop and reduces the overall time complexity
+ * to O(n), which is optimal for this problem.
+ * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
  * @param {Array} inputSequence - Sequence to remove duplicates from
  * @returns {Array} New sequence with duplicates removed
  */
 export function removeDuplicates(inputSequence) {
-  const uniqueItems = [];
+  const uniqueItemSet = new Set();
 
-  for (
-    let currentIndex = 0;
-    currentIndex < inputSequence.length;
-    currentIndex++
-  ) {
-    let isDuplicate = false;
-    for (
-      let compareIndex = 0;
-      compareIndex < uniqueItems.length;
-      compareIndex++
-    ) {
-      if (inputSequence[currentIndex] === uniqueItems[compareIndex]) {
-        isDuplicate = true;
-        break;
-      }
-    }
-    if (!isDuplicate) {
-      uniqueItems.push(inputSequence[currentIndex]);
-    }
+  for (let i = 0; i < inputSequence.length; i++) {
+    uniqueItemSet.add(inputSequence[i]);
   }
-
-  return uniqueItems;
+  return [...uniqueItemSet];
 }
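To illustrate the refactor, here is a runnable sketch with both versions side by side: the bodies are taken from the removed (-) and added (+) sides of the diff, with the `export` keyword dropped and the helper name `removeDuplicatesNestedLoops` introduced here purely for comparison.

```javascript
// Nested-loop version removed in this commit: O(n²), because each
// element is compared against every unique item collected so far.
function removeDuplicatesNestedLoops(inputSequence) {
  const uniqueItems = [];
  for (let currentIndex = 0; currentIndex < inputSequence.length; currentIndex++) {
    let isDuplicate = false;
    for (let compareIndex = 0; compareIndex < uniqueItems.length; compareIndex++) {
      if (inputSequence[currentIndex] === uniqueItems[compareIndex]) {
        isDuplicate = true;
        break;
      }
    }
    if (!isDuplicate) {
      uniqueItems.push(inputSequence[currentIndex]);
    }
  }
  return uniqueItems;
}

// Set-based version added in this commit: O(n), since Set.add()
// discards duplicates in amortized constant time.
function removeDuplicates(inputSequence) {
  const uniqueItemSet = new Set();
  for (let i = 0; i < inputSequence.length; i++) {
    uniqueItemSet.add(inputSequence[i]);
  }
  return [...uniqueItemSet];
}

// Set iteration follows insertion order, so the first occurrence
// of each value keeps its position, matching the old behavior.
const input = [3, 1, 3, 2, 1];
console.log(removeDuplicates(input)); // → [3, 1, 2]
console.log(
  JSON.stringify(removeDuplicates(input)) ===
    JSON.stringify(removeDuplicatesNestedLoops(input))
); // → true
```

Since Set iteration order is insertion order, the loop could even be collapsed to a one-liner, `return [...new Set(inputSequence)];`, with the same O(n) behavior.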
