Introduction

Have your users complained about slow page loads on the web app you developed? That might be because of a long I/O-bound call or a time-consuming process. For example, when a customer signs up on a website, we need to send a confirmation email; normally the email is sent first, and then a 200 OK response is returned for the signup POST. However, we could send the email later, after sending the 200 OK response, right? This is not so straightforward when you are working with a framework like Django, which is tightly bound to the MVC paradigm.

So, how do we do it? The first thought that comes to mind is Python's threading module. However, Python threads are implemented as pthreads (kernel threads), and because of the global interpreter lock (GIL), a Python process runs only one thread at a time. On top of that, threads make code harder to manage, maintain, and scale.

Prerequisites

This blog assumes you have working knowledge of Django and AWS Elastic Beanstalk.

Celery

Celery comes to the rescue. It helps when you have a time-consuming task (heavy compute or I/O bound) in the request-response cycle. Celery is an open source asynchronous task queue, or job queue, based on distributed message passing. In this post I will walk you through setting up Celery with Django and SQS on Elastic Beanstalk.

Why Celery ?

Celery is very easy to integrate with an existing code base: just write a decorator above a function definition to declare a Celery task, then call that function through its .delay method.

Define a Celery task

Python

from celery import Celery

app = Celery('hello', broker='amqp://guest@localhost//')

@app.task
def hello():
    return 'hello world'

Call the Celery task (it runs in the worker process)

Python

# Calling a celery task
hello.delay()

Broker

To work with Celery, we need a message broker. As of this writing, Celery supports RabbitMQ, Redis, and Amazon SQS (not fully) as message broker solutions. Unless you want to stick to the AWS ecosystem (as in my case), I recommend going with RabbitMQ or Redis, because SQS does not yet support remote control commands and events. For more info check here. One reason to use SQS is its pricing: one million free SQS requests per month for every user.

Proceeding with SQS, go to the AWS SQS dashboard and create a new SQS queue by clicking the "Create New Queue" button.

Depending on your requirements, you can select either type of queue. We will name the queue dev-celery.

Installation

Celery has very nice documentation; installation and configuration are described here. For convenience, here are the steps.

Activate your virtual environment, if you have configured one, and install celery.

pip install celery[sqs]

Configuration

Celery has built-in support for Django. It picks up its settings from Django's settings.py, from parameters prefixed with CELERY_ (the 'CELERY' namespace needs to be declared while initializing the Celery app). So put the setting parameter below in settings.py.

File: proj/proj/settings.py

Python

# Amazon credentials will be taken from environment variables.
CELERY_BROKER_URL = 'sqs://'

AWS login credentials should be present in the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
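For context, here is the Celery app bootstrap that declares this namespace, following the standard Django integration example from the Celery docs ("proj" is this post's placeholder project name, and debug_task is the docs' sample task used below):

File: proj/proj/celery.py

Python

import os
from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Read any settings in Django's settings.py that are prefixed with CELERY_.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Discover tasks.py modules in all installed Django apps.
app.autodiscover_tasks()

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))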

All the tasks registered with Celery via its decorators appear in the worker's startup output. If you find that your task does not appear there, make sure that the module containing the task is imported on startup.

Now open a Django shell in another terminal.

Terminal 2

Python

$ python manage.py shell
In [1]: from proj import celery
In [2]: celery.debug_task()        # ← Not through celery
In [3]: celery.debug_task.delay()  # ← This is through celery

After executing the task function with the delay method, the task should run in the worker process that is listening for events in the other terminal. Here, Celery sent a message with the details of the task to SQS; the worker process listening to SQS received it, and the task was executed in the worker process. Below is what you should see in terminal 1.

AWS Elastic Beanstalk already uses supervisord for managing the web server process. Celery can also be managed using supervisord. Celery's official documentation has a nice example of a supervisord config for Celery: https://github.com/celery/celery/tree/master/extra/supervisord. Based on that, we write a few commands under the .ebextensions directory.

Create two files under the .ebextensions directory. The celery.sh file extracts the environment variables and forms the Celery configuration, which is copied to the /opt/python/etc/celery.conf file, and supervisord is restarted. Here is the main celery command:
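As a rough sketch modeled on the Celery docs' supervisord example (the program name and Elastic Beanstalk paths are assumptions), the generated celery.conf might contain:

[program:celeryd-worker]
; Run the Celery worker from the Beanstalk virtualenv (path is an assumption).
command=/opt/python/run/venv/bin/celery worker -A proj --loglevel=INFO
directory=/opt/python/current/app
user=nobody
numprocs=1
autostart=true
autorestart=true
startsecs=10
; Give long-running tasks time to finish on shutdown.
stopwaitsecs=600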

What is FCM ?

FCM (Firebase Cloud Messaging) is a cross-platform (Android, iOS, and Chrome) messaging solution that lets you reliably deliver messages at no cost. FCM is best suited when you want to send push notifications to an app you built to run on Android and iOS. The advantage is that you don't have to deal separately with GCM (Google Cloud Messaging, now deprecated) and Apple's APNs. You hand your notification message over to FCM, and FCM takes care of communicating with Apple's APNs and Android messaging servers to reliably deliver those messages.

Using FCM, we can send a message to a single device or to multiple devices. There are two different types of messages: notification and data. Notification messages include JSON keys that are understood and interpreted by the phone's operating system. If you want to include custom app-specific JSON keys, use a data message. You can combine both notification and data JSON objects in a single message. You can also send messages with different priorities.

Note: You need to set priority to high if you want the phone to wake up and show the notification on screen.

Sending messages with Python

We can use PyFCM to send messages via FCM. PyFCM is good for synchronous (blocking) Python. We will discuss a non-blocking option in the next paragraph.
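A minimal sketch using PyFCM's classic API (the server key and device registration ID are placeholders):

Python

from pyfcm import FCMNotification

# Server key from the Firebase console (placeholder).
push_service = FCMNotification(api_key="<fcm-server-key>")

# Registration token of the target device (placeholder).
registration_id = "<device-registration-id>"

result = push_service.notify_single_device(
    registration_id=registration_id,
    message_title="Hello",
    message_body="Hello from FCM",
)
print(result)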

The ELK stack, also known as the Elastic Stack, consists of Elasticsearch, Logstash, and Kibana. It helps you store all of your logs in one place and analyze issues by correlating the events at a particular time.

Sentinl extends Siren Investigate and Kibana with alerting and reporting functionality to monitor, notify, and report on data series changes using standard queries, programmable validators, and a variety of configurable actions. Think of it as a free and independent "Watcher" which also has scheduled "Reporting" capabilities (PNG/PDF snapshots).

SENTINL is also designed to simplify the process of creating and managing alerts and reports in Siren Investigate/Kibana 6.x via its native App Interface, or by using native watcher tools in Kibana 6.x+.

Environment

To have a full-featured ELK stack, we would need two machines to test the collection of logs.

ELK Stack

Operating system: CentOS 7 Minimal
IP Address: 192.168.1.10
HostName: server.lintel.local

Filebeat

Operating System: CentOS 7 Minimal
IP Address: 192.168.1.20
HostName: client.lintel.local

Prerequisites

Install Java

Since Elasticsearch is based on Java, make sure you have either OpenJDK or Oracle JDK installed on your machine.

Here, I am using OpenJDK 1.8.

yum -y install java-1.8.0 wget

Verify the Java version.

java -version

Output:

java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

Configure ELK repository

Import the Elastic signing key.

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Setup the Elasticsearch repository and install it.

vi /etc/yum.repos.d/elk.repo

Add the below content to the elk.repo file.

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install Elasticsearch

Elasticsearch is an open source search engine that offers real-time distributed search and analytics with a RESTful web interface. Elasticsearch stores all the data sent by Logstash and displays it, through the web interface (Kibana), on the user's request.

Install Elasticsearch.

yum install -y elasticsearch

Configure Elasticsearch to start during system startup.

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

Use curl to check whether Elasticsearch is responding to queries.

curl -XGET http://localhost:9200

Output:

{
  "name" : "1DwGO86",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "xboS_6K5Q2OO1XA-QJ9GIQ",
  "version" : {
    "number" : "6.4.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "595516e",
    "build_date" : "2018-08-17T23:18:47.308994Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Install Logstash

Logstash is an open source tool for managing events and logs: it collects logs, parses them, and stores them in Elasticsearch for searching. Over 160 plugins are available for Logstash, which provide the capability of processing different types of events with no extra work.

Install the Logstash package.

yum -y install logstash

Create SSL certificate (Optional)

Filebeat (the Logstash forwarder) is normally installed on client servers, and it uses an SSL certificate to validate the identity of the Logstash server for secure communication.

Create the SSL certificate with either the hostname or an IP SAN.

(Hostname FQDN)

If you use the Logstash server's hostname in the beats (forwarder) configuration, make sure you have an A record for the Logstash server, and also ensure that the client machine can resolve the hostname of the Logstash server.

Go to the OpenSSL directory.

cd /etc/pki/tls/

Now, create the SSL certificate, replacing the hostname with that of your real Logstash server.
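A typical invocation (the file names and validity period are assumptions) would be:

openssl req -x509 -nodes -newkey rsa:2048 -days 3650 -batch \
    -keyout private/logstash.key -out certs/logstash.crt \
    -subj /CN=server.lintel.local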

Configure Logstash

Logstash configuration files can be found in /etc/logstash/conf.d/. A Logstash configuration consists of three sections: input, filter, and output. All three sections can live either in a single file or in separate files ending with .conf.

I recommend using a single file for the input, filter, and output sections.

vi /etc/logstash/conf.d/logstash.conf

In the first section, we will put an entry for the input configuration. The following configuration sets Logstash to listen on port 5044 for incoming logs from the beats (forwarder) that sit on client machines.
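A sketch of such an input block, assuming the certificate paths from the SSL step above:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}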

In the filter section, we will use grok to parse the logs before sending them to Elasticsearch. The following grok filter will look for logs labeled "syslog" and try to parse them into a structured index.
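A sketch of such a filter, using the stock syslog grok pattern:

filter {
  grok {
    # Parse standard syslog lines into structured fields.
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  date {
    # Use the syslog timestamp as the event time.
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}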

The Filebeat configuration file is in YAML format, which means indentation is very important. Make sure you use the same number of spaces used in the guide.

Open up the filebeat configuration file.

vi /etc/filebeat/filebeat.yml

At the top, you will see the prospectors (inputs) section. Here, you specify which logs should be sent to Logstash and how they should be handled. Each prospector starts with a – character.

For testing purposes, we will configure Filebeat to send /var/log/messages to the Logstash server. To do that, modify the existing prospector under the paths section.

Comment out – /var/log/*.log to avoid sending all .log files present in that directory to Logstash.

filebeat.inputs:
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched.
  paths:
    - /var/log/messages
    # - /var/log/*.log
...

Comment out the output.elasticsearch: section, as we are not going to store logs directly in Elasticsearch.

Now, find the line output.logstash and modify the entries as below. This section configures Filebeat to send logs to the Logstash server server.lintel.local on port 5044, and specifies the path where the copied SSL certificate is placed.
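A sketch of that section (the CA path assumes the certificate copied from the Logstash server):

output.logstash:
  # The Logstash server and port configured in the beats input.
  hosts: ["server.lintel.local:5044"]
  # Trust the self-signed certificate created earlier.
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]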

Replace server.lintel.local with the IP address in case you are using an IP SAN.

What is Redis ?

Redis is an open source in-memory database. It stores data in key-value format. Because it resides in memory, Redis is an excellent tool for caching. Redis provides a rich set of data types, which gives it an upper hand over Memcached. Apart from caching, Redis can be used as a distributed message broker.

Redis Cluster and Sentinels

To achieve high availability, Redis can be deployed in a cluster along with sentinels. Sentinel is a feature of Redis. Multiple sentinels are deployed across Redis clusters for monitoring purposes. When the Redis master goes down, the sentinels elect a new master from the slaves. When the old master comes up again, it is added back as a slave.

Another use case of clustering is distribution of load: in a high-load environment, we can send write requests to the master and read requests to the slaves.

This tutorial is specifically focused on the Redis cluster master-slave model. We will not cover data sharding across the cluster here. In data sharding, keys are distributed across multiple Redis nodes.

Setup for tutorial

For this tutorial, we will use 3 (virtual) servers. The Redis master will reside on one server, while the other two servers will be used for slaves. The standard Redis port is 6379. To differentiate easily, we will run the master on port 6379 and the slaves on ports 6380 and 6381. The same applies to the sentinel services: the master's sentinel will listen on port 16379, while the slave sentinels will listen on 16380 and 16381.
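As a rough sketch of what one slave's configuration might contain under this layout (the master IP, the master name "mymaster", the quorum, and the timeouts are all assumptions):

# redis-6380.conf — one slave instance (master IP is a placeholder)
port 6380
slaveof 10.0.0.1 6379

# sentinel-16380.conf — a sentinel watching the master
port 16380
sentinel monitor mymaster 10.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000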

Notes

Security

NEVER EVER run redis on public interface

If Redis is deployed in a cloud environment like AWS, set up security groups/firewalls carefully. Most of the time, cloud providers use ephemeral IPs. Because of ephemeral IPs, even when Redis is bound to a private IP, it can be accessed over the public interface.

For more security, dangerous commands can be disabled (renamed). But be careful with them in a cluster environment.

Redis also provides a simple authentication mechanism. It is not covered here because it is out of scope.

Sentinel management

During a Redis fail-over, the config files are rewritten by the sentinel program, so be careful when restarting the Redis cluster.

What are DBF files ?

A DBF file is a standard database file used by dBASE, a database management system application. It organises data into multiple records with fields stored in an array data type. DBF files are also compatible with other “xBase” database programs, which became an important feature because of the file format’s popularity.

Tools which can read or open DBF files

Below is a list of programs which can read and open DBF files.

Windows

dBase

Microsoft Access

Microsoft Excel

Visual Foxpro

Apache OpenOffice

dbfview

dbf Viewer Plus

Linux

Apache OpenOffice

GTK DBF Editor

How to read DBF files in Linux ?

The "dbview" command, available on Linux, can read DBF files.

The code snippet below shows how to use the dbview command.

Shell

[lalit:temp]₹ dbview test.dbf

Name      : John
Surname   : Miller
Initials  : JM
Birthdate : 19800102

Name      : Andy
Surname   : Larkin
Initials  : AL
Birthdate : 19810203

Name      : Bill
Surname   : Clinth
Initials  :
Birthdate : 19820304

Name      : Bobb
Surname   : McNail
Initials  :
Birthdate : 19830405

[lalit:temp]₹

How to read it using Python ?

"dbfread" is a Python library for reading DBF files. It reads DBF files and returns the data as native Python data types for further processing.

dbfread requires Python 3.2 or 2.7. It is a pure-Python module, so it doesn't depend on any packages outside the standard library.
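A minimal sketch (Python 3) reading the same test.dbf from the dbview example above; the field names are assumed to match the columns shown there:

Python

from dbfread import DBF

# Iterate over records; each record behaves like a dict keyed by field name.
for record in DBF('test.dbf'):
    print(record['NAME'], record['SURNAME'], record['BIRTHDATE'])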

Everyone gets tons of email these days, including emails about super-duper offers from Amazon, and princesses and wealthy businessmen from some African country you have never heard of trying to offer you their money. Among all these emails in your inbox lie one or two valuable ones: from your friends, bank alerts, work-related stuff. Spam is a problem that email service providers have been battling for ages. There are a few open source spam-fighting tools available, like SpamAssassin or SpamBayes.

What is milter ?

Simply put, milter is mail filtering technology. It was designed by the sendmail project and is now available in other MTAs as well. People historically used all kinds of solutions for filtering mail on servers, using procmail or MTA-specific methods. The current scene seems to be moving toward sieve. But there is a huge difference between milter and sieve. Sieve comes into the picture when mail has already been accepted by the MTA and handed over to the MDA. Milter, on the other hand, springs into action in the mail-receiving part of the MTA. When a remote server makes a new connection to your MTA, your MTA gives you an opportunity to accept or reject the mail at every step of the way: the new connection, the reception of each header, and the reception of the body.

WHY NOT SIEVE WHY MILTER ?

I found sieve to be rather limited. It doesn't offer many options for implementing complex logic; it was purposefully made like that. Also, sieve starts at the end of the mail reception process, after the mail has already been accepted by the MTA.

Coding a milter program in your favorite programming language gives you full power and allows you to implement complex, creative stuff.
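For instance, with the pymilter package a skeleton filter might look like the sketch below (the class, the blocked sender, and the socket path are all assumptions):

Python

import Milter

class SpamMilter(Milter.Base):
    def envfrom(self, mailfrom, *args):
        # Reject a hypothetical known-bad sender during the SMTP dialogue,
        # before the MTA ever accepts the message.
        if 'spammer@example.com' in mailfrom:
            return Milter.REJECT
        return Milter.CONTINUE

    def eom(self):
        # End of message: everything looked fine, accept it.
        return Milter.ACCEPT

def main():
    Milter.factory = SpamMilter
    # Socket on which the MTA talks to this milter (path is an assumption).
    Milter.runmilter('spammilter', 'unix:/var/run/spammilter.sock', timeout=600)

if __name__ == '__main__':
    main()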

WATCHOUT!!!

When writing milter programs, take proper care to return a reply to the MTA quickly. Don't do long-running tasks in the milter program while the MTA is waiting for a reply. This has crazy side effects, like remote parties submitting the same mail multiple times and filling up your inbox.

Introduction

Cognito is the AWS solution for managing user profiles, and Federated Identities help keep track of your users across multiple logins. Integrated into the AWS ecosystem, AWS Cognito opens up a world of possibility for advanced front end development as Cognito+IAM roles give you selective secure access to other AWS services.

Go to AWS Cognito on the AWS console to get started!

Initial Setup — Cognito

AWS Cognito

We will be setting up AWS Cognito, which is a custom login pool (such as login with email). Cognito IS NOT a login manager for other types of login (such as Facebook and Gmail); it is only for custom logins.

Let’s first make a user pool by clicking on “Manage your User Pools”. A user pool is a group of users that fulfill the same designation. The setup screen should look like this:

User Pool Name

We're gonna walk through this process step by step, so enter the Pool name "App_Users" and click "Step through settings". The next step is "Attributes", where we define the attributes that our "App_Users" will have.

User Attributes

For now, we only want to have an email, a password, and an "agentName". The email is our unique identifier for a user, and the password is a mandatory field (which is why you don't see it in the list of standard attributes). We want users to be able to have a codename to go by, so let's set up "agentName" as a custom attribute. We are only using "agentName" to show how to add custom attributes. Scroll down and you will see the option to add custom attributes.

Custom Attributes

As of the date this tutorial was written, you cannot go back and change the custom attributes (even though AWS appears to let you), so be sure to get this right the first time! If you need to change attributes, you will have to create a new user pool. Hopefully AWS fixes this issue soon. Anyway, moving on to account policies!

Account Policies

So we can see here that our passwords can be enforced to require certain characters. Obviously, requiring a mix of various character types would be more secure, but users often don't like that. For a middle ground, let's just require the password to be 8+ characters in length and include at least 1 number. We also want users to be able to sign themselves up. The other parts are not so important, so let's move on to the next step: verifications.

Account Verifications

This part is cool: we can easily integrate multi-factor authentication (MFA), meaning users must sign up with an email as well as another form of authentication such as a phone number. A PIN is sent to that phone number, and the user uses it to verify their account. We won't be using MFA in this tutorial, just email verification. Set MFA to "off" and check only "Email" as a verification method. We can leave the "AppUsers-SMS-Role" (IAM role) that has been filled in; we won't be using it, but we may in the future. Cognito uses that IAM role to be authorized to send the SMS text messages used in MFA. Since we're not using MFA, we can move on to: Message Customizations.

Custom Account Messages

When users receive their account verification emails, we can specify what goes into that email. Here we have made a custom email and programmatically placed in the verification PIN, represented as {####}. Unfortunately, we can't pass in other variables, such as a verification link. To accomplish that, we would have to use a combination of AWS Lambda and AWS SES.

SES (Simple Email Service)

Next, click "Verify a New Address" and enter the email address you would like to verify.

Now log in to your email and open the email from AWS. Click the link inside to verify, and you will be redirected to the AWS SES page again. You have successfully verified an email! That was easy.

Now that's done, let's return to AWS Cognito and move on to: Tags.

User Pool Tags

It is not mandatory to add tags to a user pool, but it is definitely useful for managing many AWS services. Let's just add a tag 'AppName' and set it to the value 'MyApp'. We can now move on to: Devices.

Devices

We can opt to remember our users' devices. I usually select "Always", because remembering user devices is both free and requires no coding on our part. The information is useful too, so why not? Next step: Apps.

Apps

We want certain apps to have access to our user pool. These apps are not present anywhere else in the AWS ecosystem, which means that when we create an "app", it is a Cognito-only identifier. Apps are useful because we can have multiple apps accessing the same user pool (imagine an Uber clone app and a complementary Driving Test Practice App). We will set the refresh token expiration to 30 days, which means each login attempt returns a refresh token that we can use for authentication instead of logging in every time. We uncheck "Generate Client Secret" because we intend to log into our user pool from the front end instead of the back end (ergo, we cannot keep secrets on the front end, because that is insecure). Click "Create App" and then "Next Step" to move on to: Triggers.

Triggers

We can trigger various actions in the user authentication and setup flow. Remember how we said we can create more complex account verification emails using AWS Lambda and AWS SES? This is where we would set that up. For the scope of this tutorial, we will not be using any AWS Lambda triggers. Let’s move on to the final step: Review.

Review

Here we review all the setup configurations we have made. If you are sure about this info, click “Create Pool” and our Cognito User Pool will be generated!

Take note of the Pool Id us-east-1_6i5p2Fwao in the Pool details tab.

Notice the Pool Id

And the App client id 5jr0qvudipsikhk2n1ltcq684b in the Apps tab. We will need both of these in our client side app.

Notice the App client id

Now that Cognito is set up, we can set up Federated Identities for multiple login providers. In this tutorial we do not cover the specifics of FB Login, as it is not within the scope of this tutorial series. However, integrating FB Login is super easy, and we will show how it's done in the section below.

Initial Setup — Federated Identities

AWS Cognito

Next we want to set up "Federated Identities". If we have an app that allows multiple login providers (Amazon Cognito, Facebook, Gmail, etc.) for the same user, we use Federated Identities to centralize all these logins. In this tutorial, we will be using both our Amazon Cognito login and a potential Facebook login. Go to Federated Identities and begin the process of creating a new identity pool. Give it an appropriate name.

Create a new identity pool

Now expand the “Authentication providers” section and you will see the below screen. Under Cognito, we are going to add the Cognito User Pool that we just created. Copy and paste the User Pool ID and App Client ID that we made note of earlier.

Authentication providers

And if we want Facebook login for the same user identity pool, we can go to the Facebook tab and simply enter our Facebook App ID. That's all there is to it on the AWS console!

Facebook tab

Save the identity pool and you will be redirected to the screen below, where IAM roles are created to represent the Federated Identity Pool. The unauthenticated IAM role is for non-logged-in users, and the authenticated version is for logged-in users. We can grant these IAM roles permission to access other AWS resources, like S3 buckets. That is how we achieve greater security by integrating our app throughout the AWS ecosystem. Continue to finish creating this identity pool.

IAM roles

You should now see the screen below after successfully creating the identity pool. You only need to make note of one thing: the Identity Pool ID (i.e. us-east-1:65bd1e7d-546c-4f8c-b1bc-9e3e571cfaa7), which we will use later in our code. Great!

Sample code

Exit everything and go back to the AWS Cognito main screen. If we enter the Cognito section or the Federated Identities section, we see that we have the 2 necessary pools set up. AWS Cognito and AWS Federated Identities are ready to go!

AWS Cognito / AWS Federated Identities

That's all for setup! With these 2 pools, we can integrate the rest of our code into Amazon's complete authentication service and achieve top-tier user management.
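The client-side code targets a front-end SDK, but as a quick way to exercise the new user pool, here is a hedged Python sketch using boto3's cognito-idp client (the email, password, and confirmation code are placeholders; the IDs are the ones noted above):

Python

import boto3

client = boto3.client('cognito-idp', region_name='us-east-1')

# Sign up a user against the App client (no client secret, as configured above).
client.sign_up(
    ClientId='5jr0qvudipsikhk2n1ltcq684b',
    Username='agent007@example.com',          # placeholder email
    Password='Passw0rd1',                     # satisfies 8+ chars with a number
    UserAttributes=[{'Name': 'custom:agentName', 'Value': 'Bond'}],
)

# The user receives the {####} verification PIN by email and confirms with it.
client.confirm_sign_up(
    ClientId='5jr0qvudipsikhk2n1ltcq684b',
    Username='agent007@example.com',
    ConfirmationCode='123456',                # placeholder PIN
)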

Most people of this generation spend time daily sending and receiving emails. E-mail plays a vital role in our daily activities. As technology evolved, email became one of the major communication tools. It may not cost you anything to send or receive a mail, but there is an intricate process involved behind the scenes. You just press the send button, and your message goes through a complex mechanism to get received by the recipient.

The invention of email dates back to 1961, and there is more than one person who could be listed among its inventors.

The evolution of email started with messaging users on the same computer, then message transmission between computers, then messaging among multiple users and multiple computers, and finally email as we know it. Email became a revolutionary tool for communication; nowadays it is part of our daily life. We will see how email really works.

Terminology

There are different components involved in the email system, and a few abbreviated terms. Having an idea of the terminology and abbreviations will help you better understand the system.

MUA

Mail User Agent

MSA

Mail Submission Agent

MTA

Mail Transfer Agent

MDA

Mail Delivery Agent

MX

Mail Exchange

DNS

Domain Name System

MUA is the software that the user uses either to send or to retrieve mail (messages) from the server.

MSA is a piece of software installed on the mail server. It is responsible for accepting the message and handing it to the MTA (Mail Transfer Agent), which transfers it toward the destination mail server.

MTA is the Mail Transfer Agent. It is the piece of software on the server responsible for routing mail to the destination mail server, which is why we also call it the mail router or mail server. Here are a few popular MTA (mail server) softwares: Postfix, qmail, Courier Mail, and Sendmail.

Postfix is the most widely used, and it comes with many Linux distributions.

Protocols involved

There are different protocols involved in the email system. All of them are required to get email delivered to the recipient; they are the building blocks of the email system. Those protocols are:

SMTP

Simple Mail Transfer Protocol

IMAP

Internet Message Access Protocol

POP

Post Office Protocol

DNS

Domain Name System (Protocol)

How does email work?

This section will give you an idea of how these protocols work together to deliver mail to the recipient. Here is an abstract overview of the email system.

How does email work abstract

The above figure gives a simple abstract overview of the email system. As described in the figure, SMTP is the protocol used by the sender to push or send an email to the server. IMAP and POP are the protocols used by the recipient to check or retrieve messages from the server. The recipient's MUA is configured to use either IMAP or POP, or both. The protocol IMAP is bidirectional, whereas POP is unidirectional. We will see more about POP and IMAP in a later section.

As per the above figure (abstract overview), the sender's email client (MUA) sends the message using the SMTP protocol to its mail server (MTA). The mail server looks up the destination; once it finds it, it connects to the destination mail server, that is, the other MTA/MDA, and passes the message (mail) using the SMTP protocol. The recipient's server stores the received mail locally. Later, the recipient checks the mail using their mail client, which connects to their mail server.

Now let's see the flow of email in detail with the following figure.

Here Alice is sending an email to Bob (bob@domain.com) using her email client. She pushes her message to her server using SMTP, to be sent to Bob. The mail server determines the destination by getting the MX record for the destination server: Alice's mail server looks at the domain after the @ in the recipient's address and queries DNS for its MX record. Once Alice's (sender's) mail server receives the result of the DNS query for the MX record, it connects to the destination mail server and delivers the mail (message) using the SMTP protocol. The destination mail server stores the message locally. Bob then checks the newly arrived mail using his mail client.
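To make the sender's side concrete, here is a minimal sketch using Python's standard smtplib (the server name, port, and credentials are placeholders):

Python

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg['From'] = 'alice@example.org'
msg['To'] = 'bob@domain.com'
msg['Subject'] = 'Hello'
msg.set_content('Hi Bob!')

# Submit the message to Alice's mail server, which then relays it
# to Bob's server via SMTP after the MX lookup described above.
with smtplib.SMTP('mail.example.org', 587) as smtp:
    smtp.starttls()
    smtp.login('alice', 'secret')
    smtp.send_message(msg)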

I would like to illustrate this process even more with the following figure.

In the above figure, new components are introduced: MSA, MDA, and extra MX servers.

MSA is the Mail Submission Agent. It is a piece of software that receives the message from the MUA, using the same SMTP protocol (on the dedicated submission port 587, though port 25 has historically been used as well). Practically, most MTAs also perform the function of an MSA, so you may treat the MTA as the MSA.

The MDA (Mail Delivery Agent, or Message Delivery Agent) is a software component responsible for delivering e-mail messages to a local recipient's mailbox. Don't get confused: most MTAs perform the MDA functionality as well, though there are softwares designed to work only as an MDA.

Here, you can see that in practice there will be more than one mail server (MX server). All the other MX servers are for backup purposes. When the sender queries DNS for the MX record, it may get more than one MX server, each with a priority. Here is a sample output of a DNS query for Gmail's MX records: Gmail has 5 MX servers with different priorities.

$ dig gmail.com MX

; <<>> DiG 9.8.3-P1 <<>> gmail.com MX
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7760
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;gmail.com.            IN    MX

;; ANSWER SECTION:
gmail.com.        1206    IN    MX    20 alt2.gmail-smtp-in.l.google.com.
gmail.com.        1206    IN    MX    5 gmail-smtp-in.l.google.com.
gmail.com.        1206    IN    MX    10 alt1.gmail-smtp-in.l.google.com.
gmail.com.        1206    IN    MX    40 alt4.gmail-smtp-in.l.google.com.
gmail.com.        1206    IN    MX    30 alt3.gmail-smtp-in.l.google.com.

;; Query time: 18 msec
;; SERVER: 192.168.4.1#53(192.168.4.1)
;; WHEN: Sun Feb 14 09:42:23 2016
;; MSG SIZE  rcvd: 150

The above result is the output of the command-line utility dig, which we use to query DNS. In the output, you will see something like the below on the right side of the ANSWER SECTION.

20 alt2.gmail-smtp-in.l.google.com.
5 gmail-smtp-in.l.google.com.
40 alt4.gmail-smtp-in.l.google.com.
10 alt1.gmail-smtp-in.l.google.com.
30 alt3.gmail-smtp-in.l.google.com.

Here you can see numbers in front of the domains. Those numbers decide the priority of the MX records: the domain with the lowest number has the highest priority. Here the MX record gmail-smtp-in.l.google.com has the highest priority because it has the lowest number associated with it, that is, 5. So the sender's MTA will try to connect to the highest-priority MX first. If it is down, it will try the next MX (mail exchange server). As a result, mail is delivered with no downtime.

Once the mail is received by the destination mail server, it stores the message in its mail store. There are two mailbox formats used by various mail servers (softwares). Those are:

Maildir

mbox

Locally stored mail is accessed (fetched) by the recipient's client using either POP or IMAP.

Protocols POP and IMAP

POP and IMAP are application layer protocols. As described above, they are the protocols used by the recipient's email client (MUA) to fetch or access mail from the server. The two protocols are different and serve different purposes.

The protocol POP means Post Office Protocol, and POP3 is its version 3. As the name suggests, this protocol downloads messages from the server to the client. Once a message is downloaded, it is removed from the server (unless you set the "leave a copy on server" flag), just as a postcard is delivered to its destination. If you are the only one who accesses the mailbox, and from one location, POP3 suits well, and it also saves some memory on the server. But if you use POP3, you can't access the same mail from different clients.

Unlike POP3, IMAP won't delete the message on the server after downloading it. It just accesses the message in place, the way a browser accesses webpages. So it is handy if you use multiple clients from different locations.
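The difference is easy to see with Python's standard library (the host and credentials are placeholders):

Python

import poplib
import imaplib

# POP3: download messages; they stay on the server only until
# the client marks them deleted with dele().
pop = poplib.POP3_SSL('mail.example.org')
pop.user('bob')
pop.pass_('secret')
print(len(pop.list()[1]), 'messages waiting')
pop.quit()

# IMAP: read messages in place, leaving them on the server,
# so other clients see the same mailbox state.
imap = imaplib.IMAP4_SSL('mail.example.org')
imap.login('bob', 'secret')
imap.select('INBOX')
typ, data = imap.search(None, 'UNSEEN')
print('unseen message ids:', data[0])
imap.logout()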

Twisted features a decorator named inlineCallbacks, which allows you to work with Deferreds without writing callback functions.

This is done by writing your code as generators, which yield deferreds instead of attaching callbacks.

Consider the following function written in the traditional deferred style:

import txredisapi as redis
from twisted.internet import defer
from twisted.internet import reactor

def main():
    rc = redis.Connection()
    print rc
    rc.addCallback(onSuccess)
    rc.addErrback(onFail)

def onSuccess(result):
    print "Success : "
    print result

def onFail(result):
    print "Fail : "
    print result

if __name__ == '__main__':
    main()
    reactor.run()

Using inlineCallbacks, we can write this as:

import txredisapi as redis
from twisted.internet import defer
from twisted.internet import reactor

@defer.inlineCallbacks
def main():
    rc = yield redis.Connection()
    print rc
    yield rc.set("abc", "pqr")
    v = yield rc.get("abc")
    print "value : ", repr(v)
    yield rc.disconnect()

if __name__ == "__main__":
    main()
    reactor.run()

Instead of calling addCallback on the Deferred returned by redis.Connection, we yield it. This causes Twisted to return the Deferred's result to us once it fires.

Though the inlineCallbacks version looks like synchronous code that blocks while waiting for the request to finish, each yield statement allows other code to run while the yielded Deferred waits to fire.

inlineCallbacks becomes even more powerful when dealing with complex control flow and error handling.
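For example, a failure from a yielded Deferred can be caught with an ordinary try/except instead of an errback chain. A brief sketch (the helper name is hypothetical):

import txredisapi as redis
from twisted.internet import defer

@defer.inlineCallbacks
def safe_get(key):
    try:
        rc = yield redis.Connection()
        value = yield rc.get(key)
        yield rc.disconnect()
        defer.returnValue(value)
    except Exception as e:
        # Any errback fired by the yielded Deferreds is raised here.
        print "lookup failed :", e
        defer.returnValue(None)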