Quote:
Originally Posted by Dkt01
One possible issue I see is that you never seem to lock data during reads/writes. If you're managing (potentially) high-volume accesses of data, asynchronous reads and writes of un-locked data could lead to race conditions and/or data corruption. I haven't closely inspected your code, but it appears you are using threading for save_file_fs_async(). Consider the case where that data is being read during an asynchronous save. Implementing even a simple lock system would prevent undesired behavior.
Unrelated: some commenting of your code would go a long way. In its current state, your code makes it difficult to quickly understand what is going on. Consider commenting a description of what each function does and what its parameters are.
Good work so far!
It turns out that is exactly the problem I am facing!
It is not RAMCache (the function you pointed out), though. I only do filesystem reads on server launch; there are no writes whatsoever.
I am running this server on my Raspberry Pi (and the latency is actually quite low). Whenever I use ab (ApacheBench) to send 1024 requests to the server with a concurrency of 32 or higher, there is a random chance of a segmentation fault.
I am really confused about this, though. I use an automatic (RAII-style) mutex to ensure the mutex is unlocked when it goes out of scope. However, I still keep getting these faults, and since I am multithreading, they are nearly impossible to track down.
The entire server is multithreaded: every request == one new thread on the system. Those threads only last a couple of milliseconds, though. The server gives around 15ms responses on average running on a Raspberry Pi!
I have a feeling that my mutexes are not locking properly. All this magic happens in server.hpp, where there is a web server class with one function!
Could you please take a look at what could be going wrong? Now that I am thinking about it, I have a slight feeling that when I bombard the Pi with requests, those threads are spawned at nearly the same time, so there are multiple threads locking the data at once!
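For reference, this is roughly the pattern behind the "automatic mutex" idea (a simplified sketch with made-up names, not the actual code from server.hpp): lock in the constructor, unlock in the destructor, so the mutex is released when the guard goes out of scope.

```cpp
#include <pthread.h>

// Simplified sketch of a scope-based (RAII) lock around a pthread mutex.
class ScopedLock {
public:
    explicit ScopedLock(pthread_mutex_t& m) : mutex_(m) {
        pthread_mutex_lock(&mutex_);    // acquired on construction
    }
    ~ScopedLock() {
        pthread_mutex_unlock(&mutex_);  // released on scope exit
    }
private:
    pthread_mutex_t& mutex_;
    // Non-copyable: a copied guard would unlock the mutex twice.
    ScopedLock(const ScopedLock&);
    ScopedLock& operator=(const ScopedLock&);
};

pthread_mutex_t table_mutex = PTHREAD_MUTEX_INITIALIZER;
int request_count = 0;  // stand-in for the shared table

void* handle_request(void*) {
    for (int i = 0; i < 10000; ++i) {
        ScopedLock lock(table_mutex);  // held for the rest of this block
        ++request_count;               // safe: serialized by the mutex
    }
    return 0;
}
```

The key requirement is that every thread must lock the *same* mutex object; if each thread somehow ends up with its own mutex, the guard runs but protects nothing.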
However, there's another thing that concerns me:
It crashes when Google Chrome sends its idle requests to the server. Every second, I send a heartbeat request. I also start running my table monitor. After a while (sometimes only a minute or two), the server gives me a segfault or something nastier.
NOTE:
I just ran a test where I put the server under extreme load (256-request concurrency). Every once in a while, the server returned garbled text, which means the mutexes aren't working properly.
Could you please take a look at my Mutex implementation?
GitHub Address:
Here
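One failure mode worth ruling out (purely hypothetical, I don't know if it matches my repo): if the object holding the mutex is copied into each thread, every thread locks a *different* mutex, so the lock succeeds but the shared data is not actually protected.

```cpp
#include <pthread.h>

// Hypothetical illustration: a handler object that carries its own mutex.
struct Handler {
    pthread_mutex_t mutex;
    Handler() { pthread_mutex_init(&mutex, 0); }
};

pthread_mutex_t* last_locked = 0;   // records which mutex actually got locked

void handle(Handler h) {            // pass-by-value: h.mutex is a fresh copy!
    pthread_mutex_lock(&h.mutex);   // locks the copy, not the shared mutex
    last_locked = &h.mutex;
    pthread_mutex_unlock(&h.mutex);
}
```

Comparing the address of the mutex that was locked against the address of the original shows they differ, which means the "critical section" races with every other thread.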
Also, about commenting:
I am quite weird about commenting, myself. Sometimes I use too many comments; sometimes I barely use any at all!
I'm considering an upcoming server rewrite. It's funny how the server took me an hour or two to make, while the interface took about a week. I'll be keeping the web interface the same!
This might allow me to completely figure out all these problems.
I will just use C++11/14, because it offers many performance improvements and built-in threading/mutex mechanics! A hash map (std::unordered_map) will offer better performance than std::map! My goal is to have this fully built in POSIX/standard-library C++!
pthread will be one of the only notable exceptions.
Does anyone know how to fix the problem in my screenshot? It is really confusing me. I don't really use pointers much anywhere. It could be DLib, but everything was working fine not too long ago.