ENH: add maxlag and lags parameters to correlate/convolve #31261

Open
HoniSanders wants to merge 13 commits into numpy:main from HoniSanders:maxlag2025

Conversation


@HoniSanders HoniSanders commented Apr 16, 2026

PR summary

This PR adds maxlag and lags keyword parameters to np.correlate and np.convolve, plus a returns_lagvector flag that returns the corresponding lag indices alongside the result. It also adds a new public C API function PyArray_CorrelateLags.

What was troubling me is that numpy.correlate does not have a maxlag feature. This means that even if I only want to see correlations between two time series at lags between -100 and +100 ms, for example, it will still calculate the correlation at every lag between -20000 and +20000 ms (the full length of the time series).
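For concreteness, here is what today's released API does: 'full' mode always computes every lag, no matter how few lags are actually of interest.

```python
import numpy as np

# Two length-20000 series. 'full' mode computes all 2*20000 - 1 = 39999 lags,
# even when only the window [-100, 100] (201 lags) is wanted.
a = np.zeros(20000)
v = np.zeros(20000)
out = np.correlate(a, v, mode='full')
print(out.shape)  # (39999,)
```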

This PR revives #5978, which I opened back in 2015 and never finished pushing through. I had introduced this question as issues on numpy and scipy: #5954, scipy/scipy#4940 and on the scipy-dev list. It also got attention on stackoverflow at the time. The new issue that this resolves is #17286.

Apologies for the very long downtime. I'm excited to be able to come back to this and finally get this much-needed feature included.
Also, I plan on squashing the commits in this branch and naming the single commit after the title of this PR. Since I expect the review process to require more commits, I have pushed my branch as-is; I hope this is ok.

Proposed API

np.correlate(a, v, mode=..., *, maxlag=None, lags=None, returns_lagvector=False)
np.convolve(a, v, mode=..., *, maxlag=None, lags=None, returns_lagvector=False)

Parameter Design

  • maxlag=M (int): symmetric inclusive window [-M, M] (2M+1 lags). Matches MATLAB's xcorr(x, y, M) convention.
  • lags=: a range, a slice with explicit start/stop, or a 1-D integer array_like containing an arithmetic progression.

Both are keyword-only and mutually exclusive; when either is supplied, mode auto-resolves to 'lags'.

  • returns_lagvector=True → returns (result, lagvector).
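In today's released NumPy, the symmetric [-M, M] window that maxlag=M would select can only be obtained by computing the full output and slicing around the zero-lag index, which for mode='full' sits at len(v) - 1. This "full + slice" pattern is the baseline the benchmark below compares against (the function name here is ours, purely illustrative):

```python
import numpy as np

def correlate_maxlag_baseline(a, v, M):
    """Emulate the proposed maxlag=M using the current API: compute the
    full correlation, then keep the 2*M + 1 lags centered on zero lag."""
    full = np.correlate(a, v, mode='full')
    zero = len(v) - 1  # zero-lag index in 'full' mode
    return full[zero - M : zero + M + 1]

a = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([0.0, 1.0, 0.5])
print(correlate_maxlag_baseline(a, v, 1))  # lags -1, 0, +1: [2.  3.5 5. ]
```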

Major file changes

  • Python (numpy/_core/numeric.py): new maxlag/lags/returns_lagvector kwargs on correlate and convolve; helper functions _lags_from_maxlag, _lags_from_lags, _lags_from_mode.
  • C (numpy/_core/src/multiarray/multiarraymodule.c): new PyArray_CorrelateLags public C API; _pyarray_correlate implements non-mode lags and is now the single normalization site (handles array swap, negative-step normalization, output reversal internally — callers pass any valid form).

Benchmark

For a user interested in a small number of lags around 0, there is a huge speedup:

size1     size2    full + slice    maxlag=5    speedup
1000      100      47.2 µs         11.1 µs     4.3×
1000      1000     572 µs          20.2 µs     28×
100000    100      4.09 ms         14.0 µs     290×
100000    1000     60.0 ms         23.4 µs     2500×

(From benchmarks/benchmarks/bench_core.py::CorrConvLags, included in this PR.)

First time committer introduction

Hi! I have used NumPy for more than a decade now, both directly and through the rest of the scientific Python ecosystem it underpins (e.g. pandas, pytorch, etc.). When I started working on this in 2015, I was using NumPy for the first time, building simulations of biological neural networks that were more performant and open-source than the Matlab code I had previously been writing. This feature would have helped me quite a bit at the time, since I was cross-correlating very long time series but only needed a relatively small window of time lags around 0. I have wanted to come back to it because of the continued interest of others, even though I myself no longer need this specific functionality.

AI Disclosure

Claude Code is what allowed me to finally bring this PR to completion.

I used Claude as a pair-programming assistant for:

  • chasing down bugs (in particular, it helped me find a buffer overflow that the earlier PR attempt had introduced in _pyarray_correlate);
  • proposing a more intuitive argument list for the Python API, which I approved;
  • refactoring the vector-inversion logic so the C function would be the single normalization site;
  • proposing a more complete structure for the test cases (geometry/equivalence/dtype groupings); the expected values in the tests were computed by hand or by running and sub-slicing the pre-existing implementation;
  • writing some of the docstrings (Parameters/Returns wording).

Brings the maxlag2025 branch up to date with current numpy/numpy main
(843 commits since branch base).

Conflict resolution in numpy/_core/src/multiarray/multiarraymodule.c:
- Adopt upstream's _pyarray_correlate signature change from
  `int typenum` to `PyArray_Descr *typec` (PR numpy#30931).
- Keep our refactor: the function still takes (minlag, maxlag, lagstep)
  instead of (mode, *inverted), and handles array swap, negative-step
  normalization, and output reversal internally.
- Update PyArray_Correlate, PyArray_Correlate2, and PyArray_CorrelateLags
  to use upstream's PyArray_DTypeFromObject + NPY_DT_CALL_ensure_canonical
  pattern for type resolution, with proper Py_DECREF(typec) cleanup.
- Update the new array_correlatelags argument parser to use upstream's
  brace-syntax for npy_parse_arguments.

Add NPY_2_6_API_VERSION 0x00000016 and matching version-string clause
in numpy/_core/include/numpy/numpyconfig.h to support the bumped
C API version that registers PyArray_CorrelateLags at slot 369.

Tests: numpy/_core/tests/test_numeric.py: 512 passed, 1 skipped.
@HoniSanders HoniSanders marked this pull request as draft April 16, 2026 21:14
@HoniSanders HoniSanders marked this pull request as ready for review April 17, 2026 02:24
@ngoldbaum ngoldbaum added 01 - Enhancement 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes 54 - Needs decision labels Apr 17, 2026
@ngoldbaum
Member

Thanks for the PR, I just enabled CI. If it fails you may need to enable CI on your fork to get CI passing in a timely manner.

I don't have time to go through the backlog of discussion about this and am not registering an opinion about whether we want to add this or not. That said, generally NumPy is pretty conservative about adding new keywords and functions, since the API surface is already very large and complicated.

One of the rules we have is that new features require sending a message to the numpy-discussion mailing list with a proposal to add the new feature. That gives the wider contributor base who don't actively follow github a chance to weigh in.

We're also generally conservative about adding new things to the C API, so your proposal to add a new C API item needs some justification. Keep in mind that basically no one will be able to use the NumPy 2.6 C API until perhaps 2028 due to projects keeping support for older NumPy releases. Is it really worth it to add the new C API? You'll need to explain why we should add the function.

Would you mind sending such a message to the mailing list about this proposal?

@HoniSanders
Author

Thanks. I did post about this on the numpy-discussion list at the time (see https://mail.python.org/archives/list/numpy-discussion@python.org/thread/JMXNOCFQRFJYISFKSRI7MRL7GPK5ZD4S/#JMXNOCFQRFJYISFKSRI7MRL7GPK5ZD4S and especially https://mail.python.org/archives/list/numpy-discussion@python.org/thread/6FC27XPOD4SBH3KJ7VY5DDAPS5Q46GPO/#6FC27XPOD4SBH3KJ7VY5DDAPS5Q46GPO where several people chimed in saying this would be useful). I figured that was sufficient, but can re-post since it's been a while. You can also see several people hopping in to support the feature on the previous PR and closed issue.

As for the C API question, I believe it needs a new C API function because the core vector calculation implementation needs to be in C and the existing arguments are not sufficient to support the new implementation. Feel free to let me know if there is another way for the python function to call a modified version of the existing C functions.

@ngoldbaum
Member

I figured that was sufficient, but can re-post since it's been a while. You can also see several people hopping in to support the feature on the previous PR and closed issue.

Ah, I didn't search the archives. I'll take a look at those discussions and let you know if I think a new mailing list ping is warranted.

As for the C API question, I believe it needs a new C API function because the core vector calculation implementation needs to be in C and the existing arguments are not sufficient to support the new implementation.

But why does the new function need to be publicly exposed in the C API? Can't NumPy just call an internal C function?

Feel free to let me know if there is another way for the python function to call a modified version of the existing C functions.

You could not expose it publicly and instead just use the new C implementation internally.

@HoniSanders
Author

Oh sure, I didn't realize limiting it to internal use was an option. I don't know who uses the public C API. I guess the value of having the public C API function would just be to expose the functionality to whoever uses the C API, as long as we are building the functionality anyway. But I certainly wouldn't want the question of the C API to delay the inclusion of the functionality in the Python API.

@ngoldbaum
Member

Oh great, good it was just a misconception. We definitely add features to the Python API without adding equivalent features in the public C API.

@ngoldbaum
Member

I just took a look and given that it’s been more than ten years since the mailing list discussion, I do think a new ping is warranted. When you send your message, please include context about prior discussions and how this proposal differs from earlier proposals, if it does.

Also, I haven’t checked but it would also be good to see what other array libraries do and what the array API standard says about this case. If adding this makes us more conformant with the array API or otherwise makes it easier to write code that is agnostic to the underlying array API, that would also argue towards including this.

However, if this is an entirely new feature that only NumPy would implement, that makes it a harder sell.

Also please feel free to come to a NumPy community meeting to chat face-to-face about this over Zoom:

https://scientific-python.org/calendars/

@HoniSanders
Author

Ok, I will send an email to the list, and I will try to attend this week's zoom.

What do you mean by "other array libraries" and "array API standard"?

@rkern
Member

rkern commented Apr 20, 2026

Array API Standard. In particular, note the Stakeholders section, which lists those other array libraries.

convolve and correlate are not currently part of the standard, but those other libraries often do implement numpy functions that aren't in the standard yet. Changes to our implementation do have a "blast radius" in that it will either cause those libraries to do more work to follow numpy's choices or suffer greater divergence from numpy.

You may want to consider the option to implement more specialized functions in scipy.signal instead of modifying np.convolve and np.correlate. In particular, you could avoid needing returns_lagvector: we do not want to use this pattern of optional returns if we can at all avoid it. We will go to great lengths to avoid it.

@HoniSanders
Author

Thanks for the pointer.
As you mention, convolve and correlate are not part of that standard, so this shouldn't break anything. The change I propose is backwards-compatible, so it will only create work for other libraries if they want to include this additional feature (which uses optional arguments); and in that case it will be less work for them, since they only have to pass the arguments through rather than design the feature from scratch themselves. I didn't find any array libraries with conflicting implementations.
I found several related implementation ideas:

If you want, we can remove the returns_lagvector argument. That was a suggestion proposed in the first round of getting feedback on this PR ten years ago. We could have a separate function to calculate lags like scipy has if you prefer.

@HoniSanders
Author

I pushed a commit to remove the returns_lagvector argument and instead have a companion function like scipy has.
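For reference, scipy's companion function is scipy.signal.correlation_lags. A minimal pure-NumPy equivalent for mode='full' (the function name here is hypothetical, not necessarily what the pushed commit uses) looks like this:

```python
import numpy as np

# Hypothetical name, mirroring scipy.signal.correlation_lags for mode='full':
# np.correlate(a, v, 'full')[k] corresponds to lag k - (len(v) - 1),
# so the lag vector runs from -(len(v) - 1) through len(a) - 1.
def correlate_lags(in1_len, in2_len):
    return np.arange(-(in2_len - 1), in1_len)

print(correlate_lags(3, 3))  # [-2 -1  0  1  2]
```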

@rkern
Member

rkern commented Apr 20, 2026

As you mention, convolve and correlate are not part of that standard, so it shouldn't break anything. The change I propose is backwards compatible so it will only cause more work for other libraries if they want to include this additional feature (which uses optional arguments).

That's not quite the logic at play here. It's not like a dependency (e.g. scipy depends on numpy, if a new optional argument is added in a backwards-compatible fashion, that's fine because all of the existing usages of np.convolve() inside of scipy still work). Adding a new optional argument would still affect any of these libraries that provide a convolve() or correlate() that attempt to follow numpy's current signature. That's because if numpy users use the new arguments, that code will not be portable to those array implementations by swapping out np for xp (the placeholder name for the other-array-library namespace). So those other implementations still face the choice to do the work to add the new feature or else live with the growing discrepancy. There's a cost either way, and there's nothing about the optionality or backwards compatibility of the new feature that alleviates that cost.
