Non-blocking Socket
The problem
The importance of Java as a language and runtime platform for server applications grows every day. A fundamental trait of a server application is that it services multiple clients concurrently. The only way to achieve concurrency before JDK 1.4 was to allocate a separate thread of execution for servicing every connection. While this model is quite easy to implement, it contains inherent scalability problems. First, a server that maps one connection to one thread cannot serve more concurrent connections than it can allocate threads. Second, while threads, from the developer's standpoint, provide a convenient virtualization of available CPU resources, they are costly both in terms of space (each thread requires a separate call stack) and time (context switching a CPU between threads consumes time). All these factors impose limits both on the number of connections the server can process at any given time and on its effective throughput. Last but not least, the threads will potentially spend a significant part of their processing time blocked in I/O operations. This makes the server vulnerable to a particular kind of denial-of-service attack, where a malicious client can bog down the server by opening many connections to it (thereby allocating all available threads) and then sending it a completely valid request extremely slowly (say, one byte per minute). The same effect can occur even with otherwise benevolent clients sitting behind low-bandwidth connections.
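To make the contrast concrete, here is a minimal sketch of that pre-JDK 1.4, thread-per-connection model: a trivial echo server that spawns one thread per accepted socket. The class name, port number, and echo behaviour are purely illustrative; note how each blocking read() pins its thread for as long as the client cares to dawdle.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8080); // port is arbitrary for this sketch
        while (true) {
            final Socket client = server.accept();    // blocks until a client connects
            new Thread(() -> {                        // one dedicated thread per connection
                try {
                    byte[] buf = new byte[1024];
                    int n;
                    // read() blocks; a slow client keeps this thread pinned indefinitely
                    while ((n = client.getInputStream().read(buf)) != -1) {
                        client.getOutputStream().write(buf, 0, n); // echo the bytes back
                    }
                } catch (IOException ignored) {
                } finally {
                    try { client.close(); } catch (IOException ignored) {}
                }
            }).start();
        }
    }
}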
The solution
If you are concerned with any of these problems (support for an extremely large number of connections, throughput maximization, and protection from service degradation due to slow clients), you should write your servers in a non-blocking fashion. This means a radical paradigm shift - instead of allocating a dedicated thread to serve a connection, a single thread (or possibly a few threads) services a large number of connections using an event-driven mechanism. In the event-driven architecture, one thread watches multiple network sockets, and when one or more sockets are ready to be read from or written to, the thread receives an event and gets the chance to service those connections that became ready. However, this architecture assumes the availability of a non-blocking I/O library, since a crucial requirement is that the thread must never block on an I/O operation.
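What this looks like in practice, assuming the java.nio API introduced in JDK 1.4, is a single thread parked on a Selector and dispatching readiness events. The sketch below is a minimal, single-threaded echo server; the class name, port, and buffer size are arbitrary, and a real server would also register interest in writes and deal with partial writes.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                   // never block on accept()
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                             // the only place the thread waits
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);       // never block on read()/write()
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);              // returns immediately
                    if (n == -1) {
                        client.close();                    // peer closed the connection
                    } else {
                        buf.flip();
                        client.write(buf);                 // echo whatever arrived so far
                    }
                }
            }
        }
    }
}

The single thread never blocks on any one connection; the selector wakes it only when some socket actually has work for it, so thousands of mostly idle connections cost little more than their buffers.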
Also, the way a network protocol is implemented in the non-blocking world is drastically different from the blocking world. With the blocking, one-thread-per-connection paradigm, you encapsulate a network protocol in a procedural way; that is, you write code that is executed on a thread dedicated to its single connection. You can store much of the processing state on the stack: local variables, call arguments, and the code execution path itself. In the non-blocking world, by contrast, your code is invoked to process one or two packets of data at a time and then it returns control. Therefore, you cannot keep any state on the stack between invocations, as you would in the blocking model. Essentially, you must write a finite-state machine where events (an incoming packet or an empty outgoing network buffer) drive state transitions, as in the sketch below.
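As a hedged illustration of such a state machine, consider a toy, made-up protocol: a request consists of a header line carrying a decimal body length terminated by '\n', followed by that many bytes of body. The class, states, and protocol below are invented for this sketch; the point is that all parsing state lives in fields, because the method is called once per readiness event and must return without blocking.

import java.nio.ByteBuffer;

public class RequestStateMachine {
    private static final int READING_HEADER = 0;
    private static final int READING_BODY   = 1;
    private static final int DONE           = 2;

    private int state = READING_HEADER;              // the machine's current state
    private final StringBuilder header = new StringBuilder();
    private ByteBuffer body;

    // Called by the event loop every time the connection's socket becomes readable.
    // The stack is empty between calls; everything worth remembering is a field.
    public void onData(ByteBuffer packet) {
        while (packet.hasRemaining() && state != DONE) {
            switch (state) {
                case READING_HEADER: {
                    char c = (char) packet.get();
                    if (c == '\n') {
                        int bodyLength = Integer.parseInt(header.toString().trim());
                        body = ByteBuffer.allocate(bodyLength);
                        state = (bodyLength == 0) ? DONE : READING_BODY; // event drives the transition
                    } else {
                        header.append(c);
                    }
                    break;
                }
                case READING_BODY: {
                    body.put(packet.get());
                    if (!body.hasRemaining()) {
                        state = DONE;                // a complete request has been assembled
                    }
                    break;
                }
            }
        }
    }

    public boolean isComplete() {
        return state == DONE;
    }
}

Compare this with the blocking version, where the same logic would simply be two consecutive blocking reads and the "state" would be implicit in the program counter.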