
Conversation

@drculhane (Contributor) commented Dec 9, 2025

Closes #5113

Update -- after a chat with Ryan, I realized that it wasn't really enough to correct the behavior of // without also distributing it. So this version does that, with a new function called floorDivision, which calls a slightly changed floorDivisionHelper.

There are three versions of floorDivision, for the vv, vs and sv cases.

All of the call sites in BinOp that invoked floorDivisionHelper from zippered foralls now call floorDivision instead.

I also changed the calls to floorDivisionHelper in TimeClassMsg.chpl and OperatorMsg.chpl.

Also, at Ryan's suggestion, I removed "inline" from the declaration of floorDivisionHelper. I haven't yet addressed ModHelper; that will come in the PR for the behavior of the % operator.


Previously -- I modified floorDivisionHelper to handle more data types and invoked it in places where Chapel had been performing a simple (and incorrect) a/b division.
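
For context, here is a rough sketch of what the vs (array-scalar) and sv (scalar-array) overloads could look like. The names floorDivision, floorDivisionHelper, and makeDistArray are the ones this PR describes, but the bodies below are illustrative and may differ from the merged code:

```chapel
// Sketch only -- not the merged implementation.

// vs case: distributed array floor-divided by a scalar.
proc floorDivision(dividend: [?d] ?tn, divisor: ?td, type etype): [d] etype throws {
  var quotient = makeDistArray(d, etype);
  forall i in d do
    quotient[i] = floorDivisionHelper(dividend[i], divisor): etype;
  return quotient;
}

// sv case: scalar floor-divided by a distributed array.
proc floorDivision(dividend: ?tn, divisor: [?d] ?td, type etype): [d] etype throws {
  var quotient = makeDistArray(d, etype);
  forall i in d do
    quotient[i] = floorDivisionHelper(dividend, divisor[i]): etype;
  return quotient;
}
```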

@ajpotts (Contributor) left a comment


Looks good!

I noticed that division by zero still doesn't match NumPy, but I created a separate issue: #5132

@1RyanK (Collaborator) left a comment


I think the PR is mostly correct; I just have a bit of a concern, noted below.

src/BinOp.chpl Outdated
res += divisor;
}
return res;
inline proc floorDivisionHelper(dividend : ?tn, divisor : ?td) : floorDivisionType(tn,td) {
@1RyanK (Collaborator):

I feel like this is slightly heavier machinery than we need for int, uint, and bool, and I'm slightly concerned it may impact performance. Similarly with modHelper, I think.
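
One lighter-weight possibility, purely as a sketch and not something this PR does: keep the integral cases in integer arithmetic rather than routing them through the general helper. The name intFloorDiv and the guard below are hypothetical:

```chapel
// Hypothetical fast path for integral operands (int/uint); not part of this PR.
// Chapel's integer / truncates toward zero, so the quotient only needs a -1
// adjustment when the operands have opposite signs and the division is inexact.
proc intFloorDiv(dividend: ?t, divisor: t): t where isIntegralType(t) {
  var q = dividend / divisor;
  if (dividend % divisor != 0) && ((dividend < 0) != (divisor < 0)) then
    q -= 1;
  return q;
}
```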

return -1:real;
proc floorDivision (dividend: [?d] ?tn, divisor: [d] ?td, type etype) : [d] etype throws {
var quotient = makeDistArray(d, etype);
coforall loc in Locales do on loc {
@1RyanK (Collaborator):

I don't think it's necessary to make the parallelism this explicit here. Read the first paragraph here, and then the noted equivalency here.
The explicit version should end up doing more or less the same thing, but if the concern is the performance of parallel zippered iteration, you can just do a forall over the domain and let Chapel do the heavy lifting on the distribution (technically this holds as long as all the arrays share the same distribution, but since we don't give the user control over the distribution, it should be fine). A sketch of what that could look like follows below.
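
A minimal sketch of that suggestion for the vv case, assuming all three arrays share the same distributed domain d; the signature matches the one quoted above, but the body is illustrative:

```chapel
// Data-parallel forall over the distributed domain: the domain map places each
// iteration on the locale that owns index i, so no explicit coforall over
// Locales is needed.
proc floorDivision(dividend: [?d] ?tn, divisor: [d] ?td, type etype): [d] etype throws {
  var quotient = makeDistArray(d, etype);
  forall i in d do
    quotient[i] = floorDivisionHelper(dividend[i], divisor[i]): etype;
  return quotient;
}
```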

Successfully merging this pull request may close these issues:

Bug Report: Integer Floor Division (//) Misalignment Between Arkouda and NumPy
