London | 25-SDC-NOV | Jesus del Moral | Sprint 1 | Analyse and Refactor Functions#94
delmorallopez wants to merge 4 commits into CodeYourFuture:main
492dc30 to 5bdf795
> * Optimal Time Complexity: O(n) - we must examine each element at least once to calculate the sum and product, so O(n) is the best we can achieve for this problem.
> * We can reduce the constant factor by eliminating the redundant second loop.
> * Instead of two passes (2n operations), we can do both calculations in a single pass (n operations).
What do you mean by "two passes == 2n operations" and "one pass == n operations"?
What are these operations? Does that mean one pass is guaranteed to take half as much time to complete?
The code still needs to perform `sum += num` and `product *= num` whether we use one loop or two loops.
When I said “2n vs n operations”, I meant loop iterations. However, both implementations still perform n additions and n multiplications. The single-loop version reduces loop overhead and improves cache efficiency, but asymptotically both are O(n). One pass is not guaranteed to take half the time, because Big-O ignores constant factors and the arithmetic work remains the same.
You have a good understanding, and performance is indeed improved when two loops are reduced to one.
However, the improvement is not really "doubled", which is why complexity analysis doesn't focus on constant factors.
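To make the discussion above concrete, here is a sketch of the two implementations being compared (function names are mine, not from the PR). Both perform exactly n additions and n multiplications, so both are O(n); the single-pass version only saves loop overhead:

```python
def sum_and_product_two_pass(numbers):
    # Two passes: 2n loop iterations, but still n additions and n multiplications.
    total = 0
    for num in numbers:
        total += num
    product = 1
    for num in numbers:
        product *= num
    return total, product


def sum_and_product_one_pass(numbers):
    # One pass: n loop iterations doing both updates per iteration.
    # Same arithmetic work as above, so asymptotically identical: O(n).
    total, product = 0, 1
    for num in numbers:
        total += num
        product *= num
    return total, product
```

Timing either version on a large list shows a modest constant-factor difference, not a halving, which is exactly the reviewer's point.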
```diff
-        common_items.append(i)
-    return common_items
+    return list(set(first_sequence) & set(second_sequence))
```
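For context, a sketch of both versions of the refactored function (the wrapper name and loop body are assumed, since the PR only shows fragments). Note that the set-based version changes behaviour slightly: it drops duplicates and does not preserve input order:

```python
def find_common_items_loop(first_sequence, second_sequence):
    # Loop version: `i in second_sequence` is O(m) per check when
    # second_sequence is a list, giving O(n * m) overall.
    common_items = []
    for i in first_sequence:
        if i in second_sequence and i not in common_items:
            common_items.append(i)
    return common_items


def find_common_items_sets(first_sequence, second_sequence):
    # Set intersection: average O(n + m), but the result's order is
    # unspecified and duplicates are removed.
    return list(set(first_sequence) & set(second_sequence))
```

Sorting both results before comparing confirms they contain the same distinct common items.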