Regarding synchronous RESTful communication ...

Response to https://twitter.com/jeffreymaxwell/status/705760483391963136, which requires more than the 77 characters left on Twitter.

DISCLAIMER: The quality of writing and thinking here is aligned with a Twitter conversation, not a blog post, presentation, or book ;-)

Synchronous RESTful communication between Microservices is an anti-pattern ... you seem to be saying that the Netflix architecture (hystrix, eureka, ribbon, ..) is broken ... hmm what would @benjchristensen say?

The REST part of this doesn't concern me; that is just one semantic approach to communicating, typically in a request/response manner. It can be done synchronously or asynchronously.

For a legit response to this, I need "synchronous" defined and given context. If it refers specifically to network protocols, then absolutely the "synchronous" aspect of this discussion is a problem.

HTTP/1.1 is a synchronous protocol, which means a single request/response per connection. This is a significant inefficiency. However, HTTP/2 has addressed this: it uses multiplexed streams and "message passing" semantics to achieve request/response. Other request/response solutions (generally called either RPC or REST these days) like Thrift, Finagle, and gRPC all use async network protocols for request/response. These are all fine.

So, don't use HTTP/1; use HTTP/2 or some other network protocol that supports interleaving, multiplexing, and message passing.
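As a rough illustration of the multiplexing point, here is a minimal sketch using the JDK's java.net.http client (Java 11+). The https://example.org URLs and paths are placeholders, not anything from the original thread. With HTTP/2, many in-flight requests can share one connection as multiplexed streams instead of tying up a connection (and, with blocking APIs, a thread) per request/response.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class Http2MultiplexSketch {
    public static void main(String[] args) {
        // One client, HTTP/2 preferred (it falls back to HTTP/1.1 if the server
        // does not support it). Concurrent requests are carried as multiplexed
        // streams on a single connection.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        List<CompletableFuture<String>> inFlight = List.of("/a", "/b", "/c").stream()
                .map(path -> HttpRequest.newBuilder(URI.create("https://example.org" + path)).build())
                .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

        // Block only here, at the edge of the demo, to print the results.
        inFlight.forEach(f -> System.out.println(f.join().length()));
    }
}
```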

The next part of "synchronous" is the threading model and APIs used to access the network. If "blocking IO" (BIO) is used with thread-per-request, then this is also an efficiency and scaling problem with most systems as they exist in 2016 (such as Java or C++ on Linux).

This PDF shows a fairly thorough study I was part of that compares thread-per-request with event loops: https://github.com/Netflix-Skunkworks/WSPerfLab/blob/master/test-results/RxNetty_vs_Tomcat_April2015.pdf I presented the results here: https://speakerdeck.com/benjchristensen/applying-reactive-programming-with-rxjava-at-goto-chicago-2015?slide=161

So, use non-blocking IO (NIO) with event loops (at least with Linux as it stands right now).
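For concreteness, here is a bare-bones JDK NIO event loop. This is not RxNetty or anything from the study above, just a sketch of the shape: a single thread multiplexes many connections through a Selector rather than parking one thread per request while it waits on IO. The port and buffer size are arbitrary, and partial writes are ignored to keep it short.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EventLoopEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One thread services all connections: it reacts to readiness events
        // instead of dedicating a blocked thread to each request.
        while (true) {
            selector.select();
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf);
                    if (n == -1) {
                        ch.close();
                    } else {
                        buf.flip();
                        ch.write(buf); // echo back without blocking the loop
                    }
                }
            }
        }
    }
}
```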

On top of that are all the opinions about programming models, which I won't get into as that starts to diverge and become far more opinionated. The only requirement is that the programming model does not synchronously block the thread while waiting on IO; otherwise the benefits of the async, non-blocking network communication are severely diminished.
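As one example of what that requirement looks like, here is a sketch using plain CompletableFuture composition (the URLs are placeholders): the dependent call is chained onto the first response rather than blocking a thread in between.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class ComposeWithoutBlocking {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Helper that issues a non-blocking GET and exposes the body as a future.
    static CompletableFuture<String> fetch(String url) {
        return CLIENT.sendAsync(HttpRequest.newBuilder(URI.create(url)).build(),
                        HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
    }

    public static void main(String[] args) {
        // Composition: the second request is triggered by the first response;
        // no thread sits blocked between the two network calls.
        CompletableFuture<String> result =
                fetch("https://example.org/user/1")
                        .thenCompose(user -> fetch("https://example.org/recommendations?user=" + user.hashCode()))
                        .exceptionally(t -> "fallback");

        // Calling join()/get() mid-flow would re-introduce a blocked thread per
        // in-flight request; here it is done only at the edge of the demo.
        System.out.println(result.join());
    }
}
```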

As for whether "the Netflix architecture (hystrix, eureka, ribbon, ..) is broken": Hystrix had to use an inefficient sledgehammer, thread isolation, because Ribbon uses synchronous HTTP/1 and blocking IO. If Ribbon used async HTTP/2 and non-blocking IO with callbacks (and whatever async programming model on top is desired), it would be able to use async timeouts efficiently without thread isolation (as HystrixObservableCommand does).
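A sketch of that HystrixObservableCommand shape, where the NonBlockingClient interface is hypothetical and stands in for any client that returns an rx.Observable without blocking:

```java
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixObservableCommand;
import rx.Observable;

public class RecommendationsCommand extends HystrixObservableCommand<String> {

    // Hypothetical non-blocking client, purely for illustration.
    interface NonBlockingClient {
        Observable<String> get(String path);
    }

    private final NonBlockingClient client;

    public RecommendationsCommand(NonBlockingClient client) {
        super(HystrixCommandGroupKey.Factory.asKey("Recommendations"));
        this.client = client;
    }

    @Override
    protected Observable<String> construct() {
        // Emits on the client's event loop; no thread pool hop is needed
        // just to make the call interruptible.
        return client.get("/recommendations");
    }

    @Override
    protected Observable<String> resumeWithFallback() {
        return Observable.just("fallback");
    }
}
```

Invoked via observe() or toObservable(), the timeout can be applied asynchronously rather than by isolating the call on a separate thread pool.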

Another sad example I am aware of is taking a memcache client that efficiently uses message passing and non-blocking IO under the covers and hiding it behind a synchronous, blocking API so that consumers MUST block a thread per request. This is an example of how all the layers matter, up through the programming model.
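A sketch of that layering mistake with hypothetical types (neither class corresponds to a real memcache client library):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Hypothetical async client: non-blocking message passing under the covers.
interface AsyncCacheClient {
    CompletableFuture<byte[]> get(String key);
}

// The anti-pattern: a "convenience" facade that forces every caller to park
// a thread until the response arrives, erasing the async capability below it.
class BlockingCacheFacade {
    private final AsyncCacheClient delegate;

    BlockingCacheFacade(AsyncCacheClient delegate) {
        this.delegate = delegate;
    }

    byte[] get(String key) {
        try {
            // Consumers of this API are back to a thread per in-flight request.
            return delegate.get(key).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```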

So yes, I am not a fan of how Ribbon (with HTTP/1, blocking IO, and blocking APIs) works, and I have exerted a lot of effort to deal with it (via Hystrix) and to evolve away from it (RxNetty, ReactiveSocket, HTTP/2, etc).

The other topic in the thread is whether point-to-point is okay. I'm absolutely fine with point-to-point and advocate for it as the general default (all large distributed systems I'm aware of use this approach). A broker such as Kafka is absolutely not needed for "async REST" or "async message passing"; a broker serves a different set of use cases.
