Our analytics server is written in C++. It queries an underlying storage engine and returns fairly large structured data via Thrift. A typical request takes about 0.05 to 0.6 seconds to finish, depending on the request size.

I noticed that there are a few options in terms of which Thrift server we can use in the C++ code, specifically TNonblockingServer, TThreadedServer, and TThreadPoolServer. It seems like TNonblockingServer is the way to go, since it can support many more concurrent requests while still using a thread pool behind the scenes to crunch through the tasks. It also avoids the cost of constructing/destroying threads.
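For context, wiring TNonblockingServer up with a fixed-size worker pool looks roughly like the sketch below. This is a minimal, hypothetical example rather than our actual code: AnalyticsService, AnalyticsServiceIf, AnalyticsServiceProcessor and the query() method stand in for whatever the Thrift compiler generates from the IDL, and it assumes the older boost::shared_ptr-based C++ API (Thrift 0.9 era), where the server can be built from a port plus a ThreadManager.

#include <thrift/concurrency/ThreadManager.h>
#include <thrift/concurrency/PosixThreadFactory.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TNonblockingServer.h>
#include "gen-cpp/AnalyticsService.h"   // placeholder for the generated header

using namespace apache::thrift;
using namespace apache::thrift::concurrency;
using namespace apache::thrift::protocol;
using namespace apache::thrift::server;

// Placeholder handler: it blocks on the storage engine inside a pool thread.
class AnalyticsHandler : public AnalyticsServiceIf {
 public:
  void query(QueryResult& result, const QueryRequest& request) override {
    // ... call the storage engine and fill in `result` ...
  }
};

int main() {
  boost::shared_ptr<AnalyticsHandler> handler(new AnalyticsHandler());
  boost::shared_ptr<TProcessor> processor(new AnalyticsServiceProcessor(handler));
  boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

  // Event-driven I/O in front, a fixed pool of worker threads behind it.
  // The argument to newSimpleThreadManager is the size of that pool.
  boost::shared_ptr<ThreadManager> threadManager =
      ThreadManager::newSimpleThreadManager(16);
  threadManager->threadFactory(
      boost::shared_ptr<PosixThreadFactory>(new PosixThreadFactory()));
  threadManager->start();

  // Newer Thrift releases instead take a TNonblockingServerSocket and
  // std::shared_ptr, but the overall structure is the same.
  TNonblockingServer server(processor, protocolFactory, 9090, threadManager);
  server.serve();
  return 0;
}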

Facebook's update on Thrift: http://www.facebook.com/note.php?note_id=16787213919

Here at Facebook, we're working on a fully asynchronous client and server for C++. This server uses event-driven I/O like the current TNonblockingServer, but its interface to the application code is all based on asynchronous callbacks. This will allow us to write servers that can service thousands of simultaneous requests (each of which requires making calls to other Thrift or Memcache servers) with only a few threads.

Related post on Stack Overflow: Large number of simultaneous connections in thrift

That being said, you won't necessarily be able to do the actual work any faster (handlers still execute in a thread pool), but more clients will be able to connect to you at once.

I'm just wondering: are there any other factors I'm missing here? How should I decide which one fits my needs best?


1 Answer

Requests that take 50-600 milliseconds to complete are pretty long. The time it takes to create or destroy a thread is much less than that, so don't let that factor into your decision at this time. I would choose the one that is easiest to support and that is the least error-prone. You want to minimize the likelihood of subtle concurrency bugs.

This is why it is often easier to write single-threaded transaction handling code that blocks where it needs to, and have many of these running in parallel, than to have a more complex non-blocking model. A blocked thread may slow down an individual transaction, but it does not prevent the server from doing other work while it waits.
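To make that concrete, the blocking alternative is about this much code. Again, this is a minimal sketch using the same placeholder AnalyticsService names and the same boost-based API as the sketch in the question; each accepted connection gets its own thread, and the handler simply blocks on the storage engine.

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TThreadedServer.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/transport/TServerSocket.h>
#include "gen-cpp/AnalyticsService.h"   // placeholder for the generated header

using namespace apache::thrift;
using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using namespace apache::thrift::server;

int main() {
  // AnalyticsHandler is the same placeholder handler shown in the question.
  boost::shared_ptr<AnalyticsServiceIf> handler(new AnalyticsHandler());
  boost::shared_ptr<TProcessor> processor(new AnalyticsServiceProcessor(handler));
  boost::shared_ptr<TServerTransport> serverTransport(new TServerSocket(9090));
  boost::shared_ptr<TTransportFactory> transportFactory(new TBufferedTransportFactory());
  boost::shared_ptr<TProtocolFactory> protocolFactory(new TBinaryProtocolFactory());

  // One thread per connection; handler code blocks where it needs to,
  // with no callback plumbing to get wrong.
  TThreadedServer server(processor, serverTransport, transportFactory, protocolFactory);
  server.serve();
  return 0;
}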

If your transaction load increases (i.e. more client transactions) or the requests become faster to process (approaching 1 millisecond per transaction), then transaction overhead becomes more of a factor. The metric to pay attention to is throughput: how many transactions complete per unit time. The absolute duration of a single transaction is less important than the rate at which they are being completed, at least if it stays well below one second.
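To put rough numbers on that (illustrative only, not figures from the original post): with 16 worker threads and an average request time of 0.3 seconds, the ceiling is about 16 / 0.3 ≈ 53 completed requests per second. Adding worker threads or making requests cheaper raises that ceiling; changing the I/O model by itself mainly changes how many idle connections you can hold open, not how fast the work gets done.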

