This post is Part II of a series of practical posts that I am writing to help developers and architects understand and build service-oriented architectures and microservices.
I have written other stories in the same context; here are some links:
This article is also part of the book I am “lean-publishing”, Painless Docker: Unlock The Power Of Docker & Its Ecosystem, a practical guide to mastering Docker and its ecosystem through real-world examples.
In my last post (Benchmarking Amazon SNS & SQS For Inter-Process Communication In A Microservices Architecture), I tested the messaging mechanism using SNS/SQS, and even though the benchmarks were run from my laptop (and not from an EC2 instance), the results were good.
The last article was featured in many newsletters, so I decided to continue my tests and publish this post.
Event-driven architecture (EDA), also called message-driven architecture, is a software architecture pattern that promotes the production and consumption of messages, triggering a specific reaction in response to each consumed message.
A classic system architecture promotes reading and reacting to data after saving it to a data store (MySQL, PostgreSQL, MongoDB, etc.), but this is not the best approach, especially for real-time or near-real-time processing. Unless you want to spend time and money building an instantaneous reactive system on top of a database, please don't use databases for this: STREAM DATA INSTEAD.
I created two machines (you can use a single machine for both the publisher and the subscriber, since it changes nothing in the networking).
This is the simplified architecture; I played the role of the AB load tester. Both machines and services are hosted in the eu-west-1 region.
To minimize transfer time, it is recommended to keep the publisher and consumer machines in the same region.
Load Testing
Let’s consider the example of a web server writing access logs to an EC2 disk.
In the first machine, I installed Nginx:
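The install commands themselves were in a screenshot; on an Ubuntu/Debian-based EC2 instance they would look something like this (package names assumed):

```shell
# Install and start Nginx on an Ubuntu/Debian-based instance
sudo apt-get update
sudo apt-get install -y nginx
sudo service nginx start
```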
For simplicity's sake, I kept the default Nginx page; this test is about networking, not an Nginx load test.
From left to right:
I used Apache Bench (ab) to load-test my server:
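The exact command was shown as a screenshot; based on the numbers reported later (1000 requests, concurrency level of 5), it was presumably something like this (the hostname is a placeholder):

```shell
# 1000 requests total, 5 concurrent, against the publisher machine
ab -n 1000 -c 5 http://publisher-machine.example.com/
```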
Once again, my test is primarily about networking and the data sent from:
If I wanted to test Nginx itself, I would probably set the concurrency level higher.
This is another useful piece of information about the request:
And of course my test:
To run the publisher, I started my log-publisher container:
Same thing for the subscriber:
You may redirect the output to a file, since these two containers are made to be verbose.
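The docker run commands were shown as screenshots; here is a sketch of what they might look like, with placeholder image names, ARNs and queue names:

```shell
# Publisher: mount the Nginx log directory and pass AWS settings as env vars
docker run -d --name log-publisher \
  -v /var/log/nginx:/logs \
  -e AWS_REGION=eu-west-1 \
  -e TOPIC_ARN=arn:aws:sns:eu-west-1:123456789012:logs \
  log-publisher

# Subscriber: only needs the region and the queue name
docker run -d --name log-subscriber \
  -e AWS_REGION=eu-west-1 \
  -e QUEUE_NAME=logs-queue \
  log-subscriber
```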
Using Python/SNS To Create A Publisher
This is the main code that I used to publish any file mapped to /logs (from outside the container) to SNS, line by line, using the tailer library.
Since Docker supports environment variables, I used this feature to make my program read the same variables that I passed to the docker run command.
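The original script is not reproduced here; the following is a minimal sketch of the same idea, assuming hypothetical TOPIC_ARN/AWS_REGION environment variable names and using boto3:

```python
import os

# TOPIC_ARN and AWS_REGION are assumed env-var names matching the
# `docker run -e` flags; adapt them to your own setup.
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:eu-west-1:123456789012:logs")
REGION = os.environ.get("AWS_REGION", "eu-west-1")

def publish_lines(lines, sns_client, topic_arn=TOPIC_ARN):
    """Publish each log line as its own SNS message."""
    for line in lines:
        sns_client.publish(TopicArn=topic_arn, Message=line)

def main():
    # Live wiring: requires boto3 + tailer and AWS credentials.
    import boto3
    import tailer
    sns = boto3.client("sns", region_name=REGION)
    with open("/logs/access.log") as f:
        # tailer.follow() yields new lines as they are appended, like `tail -f`
        publish_lines(tailer.follow(f), sns)
```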
Using Python/SQS To Create A Subscriber
This piece of code also uses boto to connect to the right SQS queue and print the timestamp immediately after receiving a message.
I used the same environment-variable approach as in the publisher script.
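Again, a minimal sketch of the subscriber side under the same assumptions (hypothetical QUEUE_NAME/AWS_REGION env vars, boto3 instead of the original boto):

```python
import os
import time

# QUEUE_NAME and AWS_REGION are assumed env-var names; adapt as needed.
QUEUE_NAME = os.environ.get("QUEUE_NAME", "logs-queue")
REGION = os.environ.get("AWS_REGION", "eu-west-1")

def drain_once(sqs_client, queue_url, handle):
    """Receive up to 10 messages, hand each body to `handle`, then delete it."""
    resp = sqs_client.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    messages = resp.get("Messages", [])
    for msg in messages:
        handle(msg["Body"])
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
    return len(messages)

def main():
    # Live wiring: requires boto3 and AWS credentials.
    import boto3
    sqs = boto3.client("sqs", region_name=REGION)
    queue_url = sqs.get_queue_url(QueueName=QUEUE_NAME)["QueueUrl"]
    while True:
        # Print the receive timestamp next to each message body
        drain_once(sqs, queue_url, lambda body: print(time.time(), body))
```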
Benchmarking Results
I used Google Sheets to calculate the difference between the two timestamps:
And this is the chart that shows the difference between the two timestamps.
The test lasted 14.823 seconds, during which 1000 requests were sent with a concurrency level of 5. IMHO, these are good results: the highest response time was 0.28 seconds and the lowest was 0.009 seconds.
The distribution of the different response times is below:
This is another chart where I plotted the highest, the lowest, and the average transport times:
That’s all folks, Part III is coming soon. For more updates, follow me using these links ↓
Connect Deeper
Microservices are changing how we build software, but one of their drawbacks is the networking part, which can sometimes be complex, and messaging is directly impacted by networking problems. Using SNS/SQS in a pub/sub model seems to be a good way to create an inter-service messaging middleware. The publisher/subscriber scripts that I used are not really optimized for load and speed, but they make a good use case.
If this article resonated with you, please join more than 1000 passionate DevOps engineers, developers, and IT experts from all over the world and subscribe to DevOpsLinks.
You can find me on Twitter, Clarity, or my website, and you can also check out my books and trainings: SaltStack For DevOps, Practical AWS & Painless Docker.
If you liked this post, please recommend it and share it with your followers.
Don’t forget to check out my training Practical AWS.
Event-driven programming focuses on events: the flow of the program is determined by events rather than by a fixed sequence. So far we have dealt with sequential and parallel execution models; the model built around event-driven programming is called the asynchronous model. Event-driven programming depends on an event loop that is always listening for new incoming events. Once the event loop is running, the events decide what to execute and in what order.
Python Module – Asyncio
The asyncio module was added in Python 3.4 and provides infrastructure for writing single-threaded concurrent code using coroutines. The following are the main concepts used by the asyncio module −
The event loop
The event loop handles all the events in a computational code. It runs for the whole duration of the program and keeps track of incoming events and their execution. The asyncio module allows a single event loop per process. The following are some of the methods provided by the asyncio module to manage an event loop −
Example
The following example of an event loop prints "Hello World" by scheduling a callback on the loop. This example is adapted from the official Python docs.
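Sketched here with asyncio.new_event_loop() (the original docs example used get_event_loop(), which is deprecated for this use in recent Python versions):

```python
import asyncio

def hello_world(loop):
    """Callback scheduled on the loop; stops the loop after printing."""
    print('Hello World')
    loop.stop()

loop = asyncio.new_event_loop()
# Schedule the callback to run on the next loop iteration
loop.call_soon(hello_world, loop)
# Run until loop.stop() is called inside the callback
loop.run_forever()
loop.close()
```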
Futures
asyncio.Future is compatible with the concurrent.futures.Future class, which represents a computation that has not yet completed. The following are the differences between asyncio.futures.Future and concurrent.futures.Future −
Example
The following is an example that will help you understand how to use the asyncio.Future class.
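A small sketch of an asyncio Future being completed by another coroutine (names are illustrative):

```python
import asyncio

async def set_after(fut, delay, value):
    # Complete the future after a short delay
    await asyncio.sleep(delay)
    fut.set_result(value)

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Schedule the setter as a task, then await the future itself
    asyncio.ensure_future(set_after(fut, 0.01, "done"))
    return await fut

print(asyncio.run(main()))  # done
```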
Coroutines
The concept of coroutines in asyncio is similar to that of the standard Thread object of the threading module, and is a generalization of the subroutine concept. A coroutine can be suspended during execution so that it waits for external processing, and it resumes from the point where it stopped when that external processing is done. The following two ways help us implement coroutines −
async def function()
This is one way to implement coroutines with the asyncio module. Following is a Python script for the same −
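For instance, a minimal async def coroutine (function name is illustrative):

```python
import asyncio

async def add(a, b):
    # Simulate waiting on some external work before returning
    await asyncio.sleep(0.01)
    return a + b

result = asyncio.run(add(2, 3))
print(result)  # 5
```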
@asyncio.coroutine decorator
Another way to implement coroutines is to use generators decorated with @asyncio.coroutine. Note that this decorator was deprecated in Python 3.8 and removed in Python 3.11; async def is the modern form. Following is a Python script for the same −
Tasks
Tasks, a subclass of Future in the asyncio module, are responsible for executing coroutines within an event loop in a concurrent manner. The following Python script is an example of processing some tasks concurrently.
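A sketch of running two coroutines concurrently as tasks via asyncio.gather (function names are illustrative):

```python
import asyncio

async def factorial(name, number):
    f = 1
    for i in range(2, number + 1):
        # Yield control so the two tasks interleave
        await asyncio.sleep(0)
        f *= i
    return name, f

async def main():
    # gather wraps each coroutine in a Task and runs them concurrently
    return await asyncio.gather(factorial("A", 3), factorial("B", 4))

print(asyncio.run(main()))  # [('A', 6), ('B', 24)]
```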
Transports
The asyncio module provides transport classes for implementing various types of communication. These classes are not thread-safe and are always paired with a protocol instance after the communication channel is established.
The following are the distinct types of transports inherited from BaseTransport −
The following are five distinct methods of the BaseTransport class that are available across the four transport types −
Protocols
The asyncio module provides base classes that you can subclass to implement your network protocols. These classes are used in conjunction with transports: the protocol parses incoming data and asks for outgoing data to be written, while the transport is responsible for the actual I/O and buffering. Following are three classes of Protocol −
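A self-contained sketch tying a Protocol subclass to a transport: an echo server on a random local port, exercised by a client (all names illustrative):

```python
import asyncio

class EchoProtocol(asyncio.Protocol):
    """Echoes whatever it receives, then closes the connection."""
    def connection_made(self, transport):
        # The transport is handed to the protocol once the channel exists
        self.transport = transport

    def data_received(self, data):
        self.transport.write(data)
        self.transport.close()

async def main():
    loop = asyncio.get_running_loop()
    # Port 0 asks the OS for any free port
    server = await loop.create_server(EchoProtocol, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    echoed = await reader.read(100)
    writer.close()
    server.close()
    await server.wait_closed()
    return echoed

print(asyncio.run(main()))  # b'ping'
```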