
Conversation

@NimaSarajpoor
Collaborator

I have been trying to better understand how sliding-dot-product and convolution are related, and what type of convolution we are talking about... and how that convolution is related to mode 'valid' in scipy's convolve. And, eventually, how oaconvolve works. So, I decided to prepare a short tutorial to help me understand these components.

@seanlaw
I've created a .md file (draft version). I then noticed there is a bunch of stuff that a reader may want to execute. So, I am thinking of moving the content into a Jupyter notebook.

@gitnotebooks

gitnotebooks bot commented Jan 12, 2026

@seanlaw
Contributor

seanlaw commented Jan 12, 2026

I was hoping that you'd write a tutorial notebook! Glad that we're on the same page.

@review-notebook-app

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.


@NimaSarajpoor
Collaborator Author

@seanlaw
Can you please check out the added tutorial in this PR and let me know if its story is easy to follow? Please let me know if you think I should restructure it.

@NimaSarajpoor NimaSarajpoor requested a review from seanlaw January 15, 2026 03:50
Contributor

@seanlaw seanlaw left a comment


@NimaSarajpoor I think that the story isn't clear, and it's probably because you are using too much text/code to motivate your point, which results in your point being lost/hard to follow.

I want to draw your attention to the original Matrix Profile Tutorial where we leverage a ton of visuals to help explain the concepts. It's certainly a lot more work but notice that, at most, we have 6 lines of code within a cell but that code is trivial to follow and the visuals help guide us step-by-step.

In my mind, I think Figure 1 from the MASS paper (and variations of it) will help people "see" your points more clearly and how each method is the same/different. Right now, it feels like you are jumping from one concept to another and then hoping that, by providing code, the reader will be convinced all on their own. Instead, you should focus on one singular (clear) point and prove your point before moving on (i.e., build things up one-clear-step-at-a-time!).

"id": "fde882c7-0c2b-4ed5-a95e-12089419f452",
"metadata": {},
"source": [
"One way to compute the sliding-dot-product (sdp) between a query Q and a time series T is\n",
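For reference, the SDP that this cell introduces can be sketched naively as follows (the example arrays here are hypothetical, not from the notebook):

```python
import numpy as np

def sliding_dot_product(Q, T):
    """Naive SDP: dot the query Q against every length-m window of T."""
    m = len(Q)
    return np.array([np.dot(Q, T[i : i + m]) for i in range(len(T) - m + 1)])

# Hypothetical example arrays
Q = np.array([1.0, 2.0, 3.0])
T = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
print(sliding_dot_product(Q, T))  # [32. 38. 44.]
```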
Contributor


I recommend using "SDP" instead of "sdp" because it is visually distinct.

Contributor


The second sentence is really, really hard to read.

"id": "261141da-ff09-4ae8-a39e-0f5c4b518836",
"metadata": {},
"source": [
"The assertion passes. This confirms that the FFT-based method for convolving the two arrays gives the same result as the circular convolution. To the best of my knowledge, there is no function in numpy or scipy that gives us the circular convolution. `scipy.signal.fftconvolve` computes the linear convolution, which results in a different output. However, the good news is that the slice `M-1 : N` still gives the sliding dot product."
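The claim in this cell can be sketched directly; the arrays below are hypothetical stand-ins for the notebook's example (`M = len(Q)`, `N = len(T)`):

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical example arrays
T = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
Q = np.array([1.0, 2.0, 3.0])
M, N = len(Q), len(T)

# Circular convolution of T with the flipped query, via the FFT
# (there is no direct circular-convolution helper in numpy/scipy).
circ = np.fft.ifft(np.fft.fft(T) * np.fft.fft(Q[::-1], N)).real

# Linear convolution of the same arrays, as computed by fftconvolve.
lin = fftconvolve(T, Q[::-1])

# The full outputs differ, but the slice M-1 : N agrees and equals the SDP.
print(circ[M - 1 : N])  # sliding dot product
print(lin[M - 1 : N])   # same values
```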
Contributor


The assertion isn't super convincing. Maybe if you demonstrate that it holds for multiple lengths?
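A sketch of the suggested check, sweeping several arbitrarily chosen lengths rather than one hand-picked example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check the circular-convolution / valid-convolution agreement on the
# slice M-1 : N for several (hypothetical) query and series lengths.
for N in (8, 17, 64, 100):
    for M in (2, 3, 5, min(8, N)):
        T = rng.standard_normal(N)
        Q = rng.standard_normal(M)
        circ = np.fft.ifft(np.fft.fft(T) * np.fft.fft(Q[::-1], N)).real
        valid = np.convolve(T, Q[::-1], mode="valid")
        assert np.allclose(circ[M - 1 : N], valid)
print("all length combinations agree")
```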

"id": "e5189de2-0b97-4f50-a30a-060b54d2a65b",
"metadata": {},
"source": [
"It can be shown that sdp can be computed via convolution. The general formula of [the discrete convolution](https://en.wikipedia.org/wiki/Convolution#Discrete_convolution) between two signals is as follows:\n",
Contributor


You have still yet to define convolution and it feels like you're already jumping into a factual statement of "it can be shown..."

If you actually describe what a convolution is upfront then it should become obvious that it looks similar to SDP

Collaborator Author

@NimaSarajpoor NimaSarajpoor Jan 16, 2026


If you actually describe what a convolution is upfront

I think I shouldn't have started with describing SDP, as it should be safe to assume that the audience already knows about it. As you suggested, I can start by defining convolution, then at some point simply show that the sliding-dot-product of Q and T is equivalent to the "valid" convolution of Q[::-1] and T, which should suffice for SDP. Then I can keep the focus on the "valid" convolution until the end of the tutorial. (I mean, there's no need to keep talking about SDP in different places or computing it in different examples. I think this should help the reader not lose focus.)
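The proposed equivalence can be sketched in a few lines (the example arrays are made up for illustration):

```python
import numpy as np

# Hypothetical example arrays
Q = np.array([1.0, 2.0, 3.0])
T = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
m = len(Q)

# Sliding dot product of Q and T, computed directly ...
sdp = np.array([Q @ T[i : i + m] for i in range(len(T) - m + 1)])

# ... equals the "valid" convolution of the flipped query with T.
conv_valid = np.convolve(T, Q[::-1], mode="valid")

print(sdp)         # [32. 38. 44.]
print(conv_valid)  # identical
```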

"source": [
"It can be shown that sdp can be computed via convolution. The general formula of [the discrete convolution](https://en.wikipedia.org/wiki/Convolution#Discrete_convolution) between two signals is as follows:\n",
"\n",
"$$ (x * h)[i] = \\sum_{j=-\\infty}^{j=+\\infty}{x[j]h[i-j]} $$\n",
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

what is x and h and where did it come from? How are they related to Q and T?

Collaborator Author


what is x and h and where did it come from?

Will fix that when defining the convolution.

How are they related to Q and T?

I will add that explanation when I want to show the connection between convolution and SDP.

then at some point simply show that the sliding-dot-product of Q and T is equivalent to the "valid" convolution of Q[::-1] and T, and that should suffice for SDP. Then, I can keep the focus on the "valid" convolution until the end of the tutorial.

"$$ (x * h)[i] = \\sum_{j=-\\infty}^{j=+\\infty}{x[j]h[i-j]} $$\n",
"\n",
"\n",
"In our case, we are working with signals of finite length, meaning the value at any out-of-range index is zero. Let's try this for our example. Note that, for a given index `i`, `h[i-j]` moves backward as `j` increases. In other words, convolution reverses one of the signals. Therefore, we also flip `Q` before applying the convolution! That way, when the convolution flips it again, it gives us the correct sliding dot product.\n",
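As a sanity check of the formula in this cell, a direct (slow) implementation of the sum, with out-of-range terms treated as zero, can be compared against `np.convolve` (example arrays are hypothetical):

```python
import numpy as np

def discrete_convolution(x, h):
    """Direct evaluation of (x * h)[i] = sum_j x[j] * h[i - j],
    treating out-of-range indices as zero (finite-length signals)."""
    n_out = len(x) + len(h) - 1
    out = np.zeros(n_out)
    for i in range(n_out):
        for j in range(len(x)):
            if 0 <= i - j < len(h):
                out[i] += x[j] * h[i - j]
    return out

# Hypothetical example arrays
x = np.array([1.0, 2.0, 3.0])
h = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
print(discrete_convolution(x, h))
print(np.convolve(x, h))  # matches the direct sum
```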
Contributor


This sounds rather technical and, maybe unnecessary?

I tried reading this multiple times and the notation seems very confusing. Instead of starting with an equation, I wonder if it's possible to simply start with a set of steps/instructions and then show that it can extend the instructions to longer and longer arrays (i.e., it becomes the equation)?

Collaborator Author


I tried reading this multiple times and the notation seems very confusing

Regarding the math formula: do you mean the sum over `j` for a given `i`? Because I have a problem with that too; I couldn't keep the whole thing in my mind. I will think about the following avenues:

I wonder if it's possible to simply start with a set of steps/instructions and then show that it can extend the instructions to longer and longer arrays (i.e., it becomes the equation)?

want to draw your attention to the original Matrix Profile Tutorial where we leverage a ton of visuals to help explain the concepts.

@NimaSarajpoor
Collaborator Author

... using too much text/code to motivate your point, which results in your point being lost/hard to follow.

I want to draw your attention to the original Matrix Profile Tutorial where we leverage a ton of visuals to help explain the concepts. It's certainly a lot more work but notice that, at most, we have 6 lines of code within a cell but that code is trivial to follow and the visuals help guide us step-by-step.

Agreed... It feels like I have not been staying on one single line, and have instead been making small jumps left and right.

In my mind, I think Figure 1 from the MASS paper (and variations of it) will help people "see" your points more clearly and how each method is the same/different.

I will also read that paper. I think it should help me structure my thoughts and improve the flow of this tutorial.

@NimaSarajpoor
Collaborator Author

NimaSarajpoor commented Jan 15, 2026

@seanlaw
FYI: A few comments of yours that appeared on this PR page show "2 days ago". Did you review at that time as well?! There are some that show "an hour ago" or so... and if I go to the "Files changed" section, I can see those comments there too. I cannot see any of your comments when I click on ReviewNB, though.

@seanlaw
Contributor

seanlaw commented Jan 15, 2026

FYI: A few comments of yours that appeared on this PR page show "2 days ago". Did you review at that time as well?!

Yes, I started the other day but wanted to let you finish since I knew you weren't done. I am trying to move away from ReviewNB for reviewing notebooks.
