Description
This is kind of weird, and I'm not sure if it's a bug in xenium or something I should be doing differently, but I figure I'll bring it up anyway.

Basically, I store objects as `unique_ptr`s in a `vyukov_hash_map`, and when one of them dies (by being removed from the map) its destructor pushes something onto a `michael_scott_queue`. I've run into an issue where, when the node is reclaimed and my destructor is called, the act of pushing to the queue causes the reclaimer to run again, effectively recursing back into the destructor and causing Bad Things to happen.
I've managed to reproduce it with this example:
```cpp
#include <xenium/policy.hpp>
#include <xenium/michael_scott_queue.hpp>
#include <xenium/vyukov_hash_map.hpp>
#include <xenium/reclamation/stamp_it.hpp>

#include <cassert>
#include <functional>
#include <iostream>
#include <memory>
#include <thread>
#include <vector>

xenium::michael_scott_queue<int, xenium::policy::reclaimer<xenium::reclamation::stamp_it>> queue;

struct foo
{
  int id;
  bool killed{false};

  ~foo() noexcept
  {
    std::cout << "[thread:" << std::this_thread::get_id() << "] foo destructor: " << id << "\n";
    assert(!killed);
    killed = true;
    queue.push(id);
  }
};

int main()
{
  xenium::vyukov_hash_map<int, std::unique_ptr<foo>, xenium::policy::reclaimer<xenium::reclamation::stamp_it>> map;

  int batch_size = 200;
  int batches = 8;

  std::vector<std::thread> threads;
  for (int batch = 0; batch < batches; ++batch)
  {
    threads.emplace_back([&map, batch, batch_size]()
    {
      int min = batch * batch_size;
      int max = (batch + 1) * batch_size;
      for (int i = min; i < max; ++i)
        map.get_or_emplace(i, std::make_unique<foo>(i));
      map.erase(min);
    });
  }
  for (auto&& thread : threads)
    thread.join();
}
```

It takes a few runs to make it happen sometimes (and is more likely for bigger values of `batch_size` and `batches`), but I do eventually get something like
```
[thread:139690058348224] foo destructor: 0
[thread:139690033170112] foo destructor: 600
[thread:139690024244928] foo destructor: 800
[thread:139690024244928] foo destructor: 800
xenium-destructor: xenium-destructor.cpp:21: foo::~foo(): Assertion `!killed' failed.
Aborted
```
i.e. the destructor is called twice for the same node, from the same thread.

What seems to be happening, from looking at the call stack in my debugger, is:
- Erasing an entry from the map orphans/retires the node, so the reclaimer can eat it
- Pushing something else to the queue causes the reclaimer to run
- Reclaimer identifies that node as OK to kill and begins to do so
- We enter the destructor as part of cleaning up the node
- In the destructor, we try to push something else to the queue
- As part of the push, the reclaimer runs again
- Reclaimer finds that same node again, because it's still alive, because we're in the middle of killing it
- We end up recursing back into the destructor
Is this a bug in `stamp_it` (or somewhere else in xenium)? Or is it not a bug but a known quirk of `stamp_it`, in which case should I just use a different reclaimer for this use case?