Lock-Free Multiple-Producer & Multiple-Consumer Stack and Queue [on hold]











Update: The Dequeue method was reading from Head multiple times.
Update: Dequeue's curr pointer could become null while looking for the last node.
Update: I've switched to C++11 std::atomic, and so far it seems to work correctly with many producers/consumers.



I am looking for help finding the bugs in some lightweight multiple-producer/multiple-consumer (MPMC) data structures: a stack and a queue. They are lock-free and rely only on CAS (__sync_bool_compare_and_swap / compare_exchange_weak) to maintain consistency.



I'm fairly sure the stack is correct. Push and Pop both try to swap the same head pointer, and only one of them can succeed at a time.



I know the queue was not 100% correct. Enqueue and Dequeue appear to work fine, but as throughput goes up, messages are sometimes lost. Enqueue is the same operation as Push; Dequeue walks the entire queue from head to tail and trims off the last node.



I know this makes Dequeue O(N), but that's an acceptable trade-off for a lock-free MPMC queue.



I want to validate that the code truly is MPMC-safe without using locks.
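The real Node<T> and LockFreeMemCache aren't shown here. A minimal stand-in that matches how they are used (and is just enough to compile and test the snippets) would look roughly like this; the real implementations may differ:

#include <atomic>
#include <cstddef>

// Assumed node layout, inferred from the posted code: Data is read/written
// directly and Next is treated as an atomic pointer.
template<class T>
struct Node
{
    T *Data = nullptr;
    std::atomic<Node<T> *> Next{nullptr};
};

// Stand-in for LockFreeMemCache: simply forwards to new/delete so the
// snippets compile. A real lock-free cache that recycles nodes would
// reintroduce the usual ABA/reclamation questions.
template<class N>
class LockFreeMemCache
{
public:
    N *Malloc() { return new N(); }
    void Free(N *node) { delete node; }
};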



template<class T>
class LockFreeStack
{
public:
    LockFreeStack()
    {
        Head = NULL;
    }

    ~LockFreeStack()
    {
    }

    void Push(T *item)
    {
        Node<T> *new_node = Cache.Malloc();
        new_node->Data = item;
        bool done = false;
        while (!done)
        {
            auto head = Head.load();
            new_node->Next.store(head);
            // Publish the new node only if Head hasn't moved since we read it.
            if (Head.compare_exchange_weak(head, new_node))
            {
                done = true;
            }
        }
    }

    T *Pop()
    {
        T *returnValue = NULL;
        bool done = false;
        while (!done)
        {
            auto head = Head.load();
            if (head == NULL)
            {
                done = true;
            }
            else
            {
                Node<T> *curr = head;
                // Swing Head to the next node only if nobody else popped first.
                if (Head.compare_exchange_weak(head, curr->Next))
                {
                    done = true;
                    returnValue = curr->Data;
                    curr->Next = NULL;
                    Cache.Free(curr);
                }
            }
        }
        return returnValue;
    }

public:
    std::atomic<Node<T> *> Head;

private:
    LockFreeMemCache<Node<T> > Cache;
};
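For context, here is a minimal harness (using the stand-in types above; thread and iteration counts are arbitrary) that exercises the stack from several threads and checks that every pushed item comes back exactly once. If items go missing or the process crashes under this kind of load, that points at the races the question is about.

#include <thread>
#include <vector>
#include <atomic>
#include <cassert>
#include <cstddef>

// Hypothetical smoke test, not part of the original post.
int main()
{
    LockFreeStack<int> stack;
    const int kThreads = 4;
    const int kPerThread = 100000;
    std::atomic<long> popped{0};

    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t)
    {
        workers.emplace_back([&]()
        {
            for (int i = 0; i < kPerThread; ++i)
            {
                stack.Push(new int(i));   // producer side
                int *value = stack.Pop(); // consumer side
                if (value != NULL)
                {
                    popped.fetch_add(1);
                    delete value;
                }
            }
        });
    }
    for (auto &w : workers)
    {
        w.join();
    }

    // Drain anything left on the stack.
    while (int *value = stack.Pop())
    {
        popped.fetch_add(1);
        delete value;
    }
    assert(popped.load() == (long)kThreads * kPerThread);
    return 0;
}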


The lock-free queue was inspired by the original lock-free queue papers. It does not maintain a Tail pointer, because keeping one would require two CAS operations (updating the last node's Next pointer and updating Tail) to appear atomic together.
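To illustrate that point, here is a hypothetical Tail-based enqueue (not the posted code). It needs two separate CAS steps, and between them other threads observe a stale Tail; the classic Michael & Scott queue deals with this by letting other threads help swing Tail forward rather than by doing both CASes at once.

// Hypothetical illustration only. Assumes Node<T> from the sketch above and
// that the list is never empty (e.g. a dummy node), so tail != NULL.
template<class T>
void EnqueueWithTail(std::atomic<Node<T> *> &Tail, Node<T> *new_node)
{
    for (;;)
    {
        Node<T> *tail = Tail.load();
        Node<T> *expected = NULL;
        // CAS #1: link the new node after the current last node.
        if (tail->Next.compare_exchange_weak(expected, new_node))
        {
            // Window: the node is linked, but Tail still points at the old
            // last node, so other threads see a stale Tail here.
            // CAS #2: swing Tail forward.
            Tail.compare_exchange_weak(tail, new_node);
            return;
        }
        // CAS #1 failed: someone else linked a node first; retry.
        // (A correct algorithm would also help swing Tail here,
        // otherwise this loop can spin until the other enqueuer finishes.)
    }
}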



template<class T>
class LockFreeQueue
{
public:
    LockFreeQueue()
    {
        Head = NULL;
    }

    ~LockFreeQueue()
    {
    }

    void Enqueue(T *item)
    {
        Node<T> *new_node = Cache.Malloc();
        new_node->Data = item;

        bool done = false;
        while (!done)
        {
            auto head = Head.load();
            new_node->Next = head;
            if (Head.compare_exchange_weak(head, new_node))
            {
                done = true;
            }
        }
    }

    T *Dequeue()
    {
        T *returnValue = NULL;
        bool done = false;
        while (!done)
        {
            auto head = Head.load();
            if (head == NULL)
            {
                done = true;
            }
            else
            {
                std::atomic<Node<T> *> prev, curr;
                prev.store(NULL);
                curr = head;
                bool found = false;
                // Walk from head to the last node (the oldest item).
                while (!found)
                {
                    auto c = curr.load();
                    if (c == NULL)
                    {
                        break;
                    }
                    if (c->Next == NULL)
                    {
                        found = true;
                        break;
                    }
                    prev.store(c);
                    curr.store(c->Next);
                }
                if (found)
                {
                    if (prev == NULL)
                    {
                        // Single node: try to empty the queue.
                        if (Head.compare_exchange_weak(head, NULL))
                        {
                            done = true;
                        }
                    }
                    else
                    {
                        auto p = prev.load();
                        auto c = curr.load();
                        // Detach the last node from its predecessor.
                        if (p->Next.compare_exchange_weak(c, NULL))
                        {
                            done = true;
                        }
                    }
                    if (done)
                    {
                        returnValue = curr.load()->Data;
                        Cache.Free(curr);
                    }
                }
            }
        }
        return returnValue;
    }

private:
    std::atomic<Node<T> *> Head;
    LockFreeMemCache<Node<T> > Cache;
};
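For the validation part, a harness roughly like the following (again using the stand-in types above; all constants are arbitrary) counts every enqueued and dequeued message, so lost messages show up as a count mismatch, and a crash under load points at the same races.

#include <thread>
#include <vector>
#include <atomic>
#include <cstdio>
#include <cstddef>

// Hypothetical MPMC harness, not part of the original post.
int main()
{
    LockFreeQueue<int> queue;
    const int kProducers = 4;
    const int kConsumers = 4;
    const int kPerProducer = 100000;

    std::atomic<long> produced{0};
    std::atomic<long> consumed{0};
    std::atomic<bool> producersDone{false};

    std::vector<std::thread> threads;
    for (int p = 0; p < kProducers; ++p)
    {
        threads.emplace_back([&]()
        {
            for (int i = 0; i < kPerProducer; ++i)
            {
                queue.Enqueue(new int(i));
                produced.fetch_add(1);
            }
        });
    }
    for (int c = 0; c < kConsumers; ++c)
    {
        threads.emplace_back([&]()
        {
            for (;;)
            {
                int *value = queue.Dequeue();
                if (value != NULL)
                {
                    consumed.fetch_add(1);
                    delete value;
                }
                else if (producersDone.load())
                {
                    break; // queue looked empty after all producers finished
                }
            }
        });
    }

    // Join producers (the first kProducers threads), then signal consumers.
    for (int p = 0; p < kProducers; ++p)
    {
        threads[p].join();
    }
    producersDone.store(true);
    for (int c = 0; c < kConsumers; ++c)
    {
        threads[kProducers + c].join();
    }

    // Drain anything the consumers missed during shutdown.
    while (int *value = queue.Dequeue())
    {
        consumed.fetch_add(1);
        delete value;
    }
    std::printf("produced=%ld consumed=%ld lost=%ld\n",
                produced.load(), consumed.load(),
                produced.load() - consumed.load());
    return 0;
}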









put on hold as off-topic by πάντα ῥεῖ, 200_success, Sᴀᴍ Onᴇᴌᴀ, Emily L., Dannnno 7 hours ago


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "Code not implemented or not working as intended: Code Review is a community where programmers peer-review your working code to address issues such as security, maintainability, performance, and scalability. We require that the code be working correctly, to the best of the author's knowledge, before proceeding with a review." – πάντα ῥεῖ, 200_success, Sᴀᴍ Onᴇᴌᴀ, Emily L., Dannnno

If this question can be reworded to fit the rules in the help center, please edit the question.









  • This question would benefit from a language tag. Also is it broken? You mention losing messages. That sounds broken to me.
    – bruglesco
    12 hours ago










  • This is unfortunately off-topic. On Code Review, only working code is reviewed.
    – Sandro4912
    9 hours ago










  • I've updated the code to a version that should be working, both the stack and queue. I want to validate correctness above and beyond unit testing.
    – roscoe_casita
    9 hours ago















Tags: c++ stack queue lock-free producer-consumer





