Amazon Web Services fundamental concepts for absolute beginners (Q&A approach) – Since the importance of cloud computing, and of Amazon AWS in particular, is rising on a daily basis, I decided to create a Q&A post explaining some of the most fundamental concepts of Amazon Web Services (AWS) for those who have no prior experience with it whatsoever.

This guide is intended for absolute beginners and is specifically designed for those who want to get started with AWS. If you already have any knowledge of AWS, reading this post would be a waste of your time, so I highly recommend skipping it if you are familiar with the basic AWS concepts and terminology.

What is AWS? AWS stands for Amazon Web Services; it is Amazon's cloud computing platform.

What does AWS provide? AWS is an Infrastructure as a Service (IaaS) offering.

What is PaaS? Platform as a Service.

What is SaaS? Software as a Service.

What are the differences between SaaS, PaaS, and IaaS?

SaaS provides software in the cloud and gives you a link to access it, but the management of the application is not in your hands; it is handled by a third party. Examples of such services are Google Apps, Box, and Dropbox.

PaaS is one level lower than SaaS: an environment is provided to deploy applications, but there is no need to manage infrastructure concerns such as networking, the operating system, and hardware. The main goal of PaaS is to let users focus only on application development and the business logic of the application. Examples are Google App Engine, Heroku, and Red Hat Openshift.

IaaS provides the fundamental building blocks of cloud services and is highly automated, self-provisioned, metered, and available on demand. IaaS offers services via dashboards and APIs. IaaS clients have direct access to servers and storage, much like the traditional way, but with a much higher order of scalability. IaaS is the most flexible cloud computing model and allows for automated deployment of servers, processing power, storage, and networking.

What is a VPC? It stands for Virtual Private Cloud. It is like your Facebook homepage, where you have full control over the security settings and decide who can access it and what permissions they have.

What is a VPC used for in AWS? Inside an AWS VPC we can place resources such as EC2 instances and RDS databases, while S3 usually sits outside the VPC. In other words, an AWS VPC is a private section of AWS where you can place AWS resources and allow or restrict access to them.
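
Purely as an illustration (this is not from the original post), a VPC and a subnet can be created from the command line with the AWS CLI; the CIDR ranges and the VPC ID below are arbitrary placeholders:

# create a VPC with a /16 address range
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
# create a subnet inside it (use the VpcId returned by the previous command)
$ aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.1.0/24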

Who is the number one AWS customer? Netflix is hosted on AWS and is widely cited as one of Amazon's biggest customers.

What is Amazon EC2? It stands for Elastic Compute Cloud. It is essentially a server (an instance). Common uses of an EC2 instance are web hosting and processing workloads.

What is Amazon RDS? It stands for Relational Database Service. It is like a database server. The common uses of RDS are:

Customer account information

Inventory catalog

What does it mean that AWS is highly scalable? Scalability means that when traffic to a service increases, AWS can automatically provision new instances (e.g., EC2 instances) to handle the load.

What does it mean that AWS is highly elastic? Elasticity works the other way around: when the load on a service decreases, AWS automatically decommissions the extra instances.
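
To make scalability and elasticity a bit more concrete, here is a hedged sketch of an Auto Scaling group created with the AWS CLI; the group name, launch configuration name, and availability zones are hypothetical placeholders:

# scale out to up to 4 instances under load, back in to 1 when idle
$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-web-asg \
    --launch-configuration-name my-launch-config \
    --min-size 1 --max-size 4 --desired-capacity 1 \
    --availability-zones us-west-2a us-west-2b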

What is Amazon S3? It stands for Simple Storage Service, which allows users to save their files and access them anywhere, anytime. It can be seen as a large, practically unlimited bucket. Common uses of S3 are: (1) mass storage, (2) long-term storage. Dropbox is a well-known backup service that was originally built on top of S3.
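
A minimal example of using S3 for backup-style storage with the AWS CLI; the bucket and file names are placeholders:

$ aws s3 mb s3://my-backup-bucket                    # create a bucket
$ aws s3 cp backup.tar.gz s3://my-backup-bucket/     # upload a file
$ aws s3 cp s3://my-backup-bucket/backup.tar.gz .    # download it back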

What is an AWS region? A region is a geographic area of the globe in which AWS provides its services, such as EC2, S3, and RDS.

What are availability zones in an AWS region? Each AWS region has a number of availability zones. An availability zone is a physical location that holds an AWS data center. For instance, if Oregon has three availability zones, it means the Oregon region contains three separate physical data centers; if one fails, the other two still hold your data. As a result, you don't lose your data at all.


Dead of the great Openshift 2 – as many of you may know by now, Red Hat is discontinuing Openshift v2 for free users by the end of September 2017. They have already informed customers via email and published the news on their site, click here.

The death of Openshift v2 was quite predictable, as the new platform, Openshift 3, is more capable and is built on top of a more recent technology stack. Additionally, they have not updated the v2 client app, rhc, for more than a year, and anyone who has tried to work with it has experienced a bunch of weird and annoying Ruby warnings due to the use of deprecated libraries.

I personally tried the newer platform. It is amazing and very well structured. It uses Docker extensively and is built on top of Kubernetes; further reading here. The new infrastructure is very well suited for large businesses and is not limited to startups or SMEs. In my opinion, its simplicity in provisioning instances can actually compete with AWS to some extent.

Despite all of the v3 goodies, I will miss one and only one feature of Openshift v2, and that is the ability to run your app or service 24/7 on the free tier. In my opinion, that feature was one of the biggest winning points of Openshift when it initially launched, and it attracted many open source developers. Developers could publicly offer their services for free, backed by Openshift v2, and they just needed to do minimal housekeeping from time to time.
It was an awesome option for those passionate developers who do not have any sort of income from their hobby projects and just do things for the sake of making the world a better place to live.

Even though I don't declare myself a real hacker or passionate developer, I had a couple of open source side projects running on the Openshift v2 platform, not paying a single cent for any of them since I am not making a single cent out of them either. This was quite natural for me as well as for others. Now we all have to look for a new home for our open source projects or just abandon them.

Despite all the limitations of v2, I was quite happy and managed to handle a great amount of load on the free tier with no issues. For instance, one of my (most important) side projects that was relatively successful is Simple Weather Indicator, whose back-end, the Eris service, was (and still is) running on a free Openshift v2 instance and handling between 1.5K and 2K requests per day pretty well. Now, with the death of v2, the project has no place to be hosted yet, thanks to the crappy sleeping strategies that PaaS providers, including Openshift v3 as well as Heroku, include in their T&C.
As a result, I am simply unable to run my service 24/7, and if I don't find a free PaaS provider that allows me to run my open source project all day, every day, I guess I will eventually have to declare the EOL of Eris. However, Eris is not the only project that is going to be affected; many others with lighter usage will die as well. You can see the full list on the project page.

However, one can argue that free tiers may be abused for malicious activities such as phishing, and that preventing an instance from running 24/7 reduces such chances.
I strongly disagree with such a mindset. First of all, I believe performing illegal activities (spamming, phishing, cracking) does not require a 24/7 instance; the instance just needs to be alive on demand. Let's think about a simple scenario of sending junk emails containing malicious links to people in order to crack their accounts by executing bad JavaScript code in the provided link. Such an activity does not need a permanently running instance by any means. The instance that hosts the malicious code can start up, respond to a request (in this case, the user clicking on the link in the email), and then shut down automatically after a period of inactivity. Hence, disallowing instances from running 24/7 does not hinder those who use the platform for malicious purposes at all.
Secondly, misuse of a system does not mean the provider should limit its usage. This sounds ridiculous to me. Imagine if all the email providers stopped offering free email services because some people tried to abuse the system. Does that sound logical or rational? I don't think so.

I find only a single reason behind imposing more limitations on free tiers at PaaS providers, and that is MONEY. This can be seen from two angles: (1) saving money (data centers, energy consumption, computing power, etc.) and (2) earning money by pushing free-tier users to subscribe to a paid plan. I have no argument against either point, because the first is common sense and the second is a marketing strategy, regardless of whether a person like me feels good or bad about it. It is what it is. But let's not forget that we are all standing on the shoulders of giants who made sacrifices in their lives so that we could have a better future, and if any of them had put their own benefit first, we wouldn't have many of the things we have now. To put my point more precisely: everybody in their life should make some sacrifices to push the future of humanity to the next level, but when giant companies think merely about their own benefit, they create such a poisonous environment that nobody is willing to help anymore.

To conclude this post, I believe the open source community is not in as good a position today as it was previously. Many big companies nominally contribute to open source projects or open source their own projects, but usually abuse the model in many ways for their own profit, with no moral compass or sympathy towards others' hard work. It seems that open sourcing is now a marketing and economic strategy to reduce cost and influence the market, since open sourcing is a trend. How do they reduce the cost? By having passionate people contribute to projects for free out of their own time, while the company gives nothing back to the community.

At last, if anyone knows of a way to host my Eris service, or can help me run it 24/7, please send an email to kasra@madadipouya.com

Good news! The blog content is fully restored

If you remember, my first post of 2017 (here) was about the partial loss of my blog's content due to a hard disk failure on the server I used to host the blog. Long story short, I lost all of the 2016 posts, but was lucky enough to have them saved in raw format in my Google Drive, and some could be retrieved from WordPress.com. Since then I gradually started to recover and re-add the posts and make the old links work again, to minimize the impact on the site's SEO. However, the process was not very simple, as I had to recapture many pictures and redo things again. Believe me, it was not fun or easy at all.

After nine months of slow work in progress, all the posts have finally been recovered. Of course the process took longer than I initially anticipated. I have to admit I didn't spend enough time on it and was quite frustrated with the process. That is the main reason I didn't post for some time in February, May, and June.

Now that all the content is in place, I am quite happy, and there is no reason to fuss about it. However, the main reason for writing this post was to remind myself and others to back up your data to avoid catastrophic situations. For me, even though the size of the disaster was fairly small, it took nine months to fully restore the content, which is exactly the time it takes to deliver a baby.

Enable MySQL query logging in Ubuntu

In this post, I explain how to enable MySQL query logging for all queries.

To enable query logging, historically you needed to edit the my.cnf file under the /etc/mysql path. However, that path and file name are no longer valid. In newer versions, you need to edit mysqld.cnf, which is located at /etc/mysql/mysql.conf.d.

To do so you need to open the file:

$ sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf

And uncomment the following two lines:

general_log_file = /tmp/mysql.log
general_log = 1

In the above example I chose /tmp/mysql.log as the file name and path. This is not fixed; you can change it based on your preference.

The last step is to save the file and restart the MySQL service.

$ sudo /etc/init.d/mysqld restart
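
To verify that logging is actually on (a quick sketch, assuming the /tmp/mysql.log path chosen above), you can check the server variables and watch the log file:

$ mysql -u root -p -e "SHOW VARIABLES LIKE 'general_log%';"   # should show general_log = ON
$ sudo tail -f /tmp/mysql.log                                  # watch queries as they arrive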

Keep in mind that query logging should only be enabled for troubleshooting and diagnosis purposes, and not in production, because it slows down MySQL performance and can cause numerous issues.

Running Jenkins in local, what I learned

Jenkins is an open source automation server written in Java and is highly valuable when it comes to Continuous Integration (CI) and Continuous Delivery (CD).

I started to explore Jenkins more in recent months, even though I was previously only an end user of it. It is quite amazing and there are many things to learn. Here, I have summarized a few things I learned while running Jenkins on my local machine.

Installation

There are two options available when it comes to installing Jenkins in Linux, Ubuntu in my case.

Installing via apt, yum, or zypper.

Downloading it from the Jenkins site (war file, deb file, etc.).

My advice: if you don't have any restrictions, don't even bother with the first option. Installing via a package manager is sometimes troublesome.
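
As a rough sketch of the second option (the exact download URL may differ, so check the Jenkins site), you can grab the WAR file and run it directly with Java:

$ wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
$ java -jar jenkins.war --httpPort=8080   # serves Jenkins on http://localhost:8080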

Start/Stop/Restart Jenkins manually

Like any other service, the Jenkins service script is available at /etc/init.d/jenkins, so you can easily start, stop, or restart it.

$ sudo /etc/init.d/jenkins stop/start/restart

If you get a "Not configured to run standalone" error, just edit the file at /etc/default/jenkins and set RUN_STANDALONE to true.

RUN_STANDALONE=true

Disable Jenkins autostart (Ubuntu)

Edit the file /etc/default/jenkins and revert RUN_STANDALONE to false

RUN_STANDALONE=false

Change Jenkins default port

Just edit the file located at /etc/default/jenkins and then change HTTP_PORT. For instance,

HTTP_PORT=8888

Then restart the Jenkins service.

Recovering admin password

If you have forgotten your initial Jenkins admin password, no worries: with two commands you can recover it easily.
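
One common approach, assuming a default Debian/Ubuntu package installation (this is a sketch and may not be the exact pair of commands the original post had in mind), is to read the generated initial password from Jenkins' secrets directory:

$ sudo ls /var/lib/jenkins/secrets/                      # locate the secrets files
$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword # print the initial admin password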

How to install pip3 in Ubuntu 16.04

The default pip on Ubuntu 16.04 targets Python 2.7, which is quite outdated. For instance, it is impossible to get mpsyt to work with pip for Python 2.7. Any attempt to install pip3 using sudo apt install python3-pip results in the following message:
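
As a hedged alternative (not necessarily the method the full post goes on to describe), pip for Python 3 can be bootstrapped with the official get-pip.py script:

$ curl -O https://bootstrap.pypa.io/get-pip.py   # download the bootstrap script
$ sudo python3 get-pip.py                         # install pip for Python 3
$ pip3 --version                                  # confirm the installation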

Writing Persian in Markdown and converting it to different format by Pandoc

Writing Persian in Markdown and converting it to different formats using Pandoc is not always hassle free. In fact, a few steps are involved to get everything up and running, and they are described in this post.

Basically, three steps are needed to set everything up so that Persian text written in Markdown can be converted to different formats using Pandoc. They are as follows:

Adding annotation to Markdown files containing Persian text

Persian font installation

Run Pandoc with proper parameters

Without taking the above steps, any attempt to convert Persian Markdown text to other formats will fail, either with Pandoc errors or with undesirable results such as separated characters or even empty documents.

It is good to note that these steps have only been tested on Ubuntu and may vary slightly on other distributions.

These instructions can also be applied to add Arabic language support to Pandoc; only step two will be different.

Step One: Adding annotation to Markdown files containing Persian text

For each Markdown file that contains Persian or Arabic (generally, any right-to-left language) content, the following annotation should be added to the first few lines of the file, before the actual content.

---
dir: rtl
---

This is necessary to explicitly tell Pandoc to set the text direction to right-to-left.

Step Two: Persian font installation

To get the desired output for a Markdown file using Pandoc, proper Persian fonts should be installed so that Pandoc can use them to render the text into the destination format, say PDF.

This step can be skipped if the system already has proper Persian fonts installed.

Unfortunately, Ubuntu does not ship with good Persian fonts that work with Pandoc, even though the available ones are used comfortably and flawlessly in applications such as LibreOffice. Hence, installing a set of proper Persian fonts is a must.

On the bright side, there is a great script called persian-fonts-linux, from Fzerorubigd, that automates this task.
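
Step Three: Run Pandoc with proper parameters

With the annotation in place and the fonts installed, Pandoc can be invoked to produce, for example, a PDF. The following is a minimal sketch using placeholder file names (input.md and output.pdf) and the two parameters discussed below:

$ pandoc input.md -o output.pdf --latex-engine=xelatex -V mainfont='BNazanin'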

In the above command, the first and second arguments set the input file name and the output file name (with its type); these are very general. The parameters specific yet important to this use case are --latex-engine=xelatex and -V mainfont='BNazanin'. The former sets the LaTeX engine to xelatex, since the default engine does not support Persian and any attempt to compile with it will result in errors. The latter parameter tells Pandoc to use the BNazanin font to render the output; the previous step was done precisely to acquire the necessary fonts. The absence of this parameter will usually result in blank text or separated characters. There is flexibility in font selection, though: for instance, 'BZar', 'BNasim', 'BMitra', and a variety of other fonts can be used. BNazanin was selected just for demonstration in the above example.

4 years on the road, the journey of valuable experience

It has been four years since I published the first post on this blog. Prior to that, I was skeptical about the purpose of having a personal blog as well as the nature of the content to add. I had an inner argument with myself about whether to have a blog or not. The feeling toward "no" was stronger because of a recent failed experience, even though the nature of that work was quite different.
I was unsure, to the point that I didn't purchase my own domain. I experimentally began using a .tk domain instead, and that is where geeksweb comes from. Even the look and feel wasn't like what you see now. It was totally different, as I have discussed on different occasions: simple and pure HTML, no CSS or JavaScript. I was following the style of Stallman's website; read Design of Geeksweb.tk. In fact, after 4 years I still keep the first version up and running, though of course no new content has been added for ages. It is still accessible via blog.madadipouya.com/old.
The story got more serious when I got a free, legitimate .eu.org domain after a long wait of a few months. That domain didn't have the problems of .tk and was indexed in Google hassle free. That was the ignition for getting serious about the content I created.
Then, in the first quarter of 2014, I purchased my own domain name and moved all the content to a more reliable free host, Heliohost, which failed last November and caused significant damage to the content as well as the site rank; read the disaster description, Hello from hell.
On many occasions I considered abandoning the blog, but every time I went back to it for many reasons. I am still moving forward after four years. As a matter of fact, since I published the first post, I have constantly felt that it is my responsibility to keep this medium up and running so anyone can use the content, if there is any reader at all.

Time passes too fast; not just very fast, it passes super duper fast. It feels like it was just an hour ago that I was celebrating the 200th post of the blog. Those days are gone, but the awesome memories are registered in my heart. Memories never get old.

So I will keep going despite all the difficulties along the way, in these days when blogging is nearly dead and has been replaced by YouTube channels. I am not arguing which one is bad and which one is good; blogs are simply not popular anymore and have been partially replaced by social media and other sorts of media such as podcasts and videos.
So happy 4th birthday, geeksweb/blog.madadipouya

Happy Nowruz 1396

It is always great to find every possible opportunity to celebrate, regardless of nationality, religion, etc. Happy moments are great and precious; they usually last a short time but are registered for a long time in our memories, and in this short life of ours they put a smile on our faces numerous times, whether in the moment itself or when remembering a sweet, happy memory.
Today is the beginning of spring. The 21st of March is considered the Persian New Year, also known as Nowruz.
It is an ancient Persian celebration, and I am proud and glad to be part of this great culture and to celebrate this important event yearly, no matter where and how far from my home country. It is registered in my heart and my soul. It is an inseparable part of me and my identity.
And not only mine: it is part of the identity of people in 11 countries across the globe, with different languages, different skin colors, and so on. It is a part of the identity of 187 million people, and I was lucky enough to be one of them, a drop in the ocean.

Anyhow, happy Nowruz everyone.
I am taking this moment to wish peace of mind and heart for everyone in the world. Hopefully this year (1396) the world sees more peaceful days than any other year. Hopefully, all the wars, wherever they are, will end. Hopefully, no more kids will die because of war and/or poverty. And hopefully, all of us will become better and more useful people for our world.
In case you do not know what Nowruz is, have a look at this link:

How to AMPtize your WordPress site

Earlier, in this post, I introduced Accelerated Mobile Pages (AMP); there, the AMP technology and its features were discussed.
The current post aims at implementing AMP for the WordPress platform via plugins, in easy steps, for non-programmers and those who do not have experience in web design.
One of the biggest advantages of using WordPress is the abundance of useful plugins for different purposes that make life easier. In a 2014 post I discussed some of them, here.
As you may guess by now, there is an awesome plugin available for AMP as well. The name of the plugin is AMP, and it is available to download from the official WordPress site, here.
Overall, the installation process and configuration are quite smooth and do not require any sort of expertise. In fact, the default configuration is quite standard and does not require any changes, except maybe the theme.
After installing and configuring the AMP plugin, the target website should be AMP friendly. To see the AMP version of the site, you just need to append /amp to the end of a URL. However, you must remember that this is not applicable to administrator pages.
For instance, the AMP-friendly URL of the latest post of this blog is this: http://blog.madadipouya.com/2017/03/11/auto-login-using-ftp-command-in-linux/amp/
The normal and AMP-friendly versions look like this:

Normal VS AMP version

What's next?

The only step that remains is to wait for some time (between a few days and a few months, depending on the site ranking) for Google to index the AMP pages of the site. As a result, if a page is searched for in any AMP-compatible mobile browser, users will be redirected to the AMP version of the site. The picture below is a good example of the AMP version of this blog's post about Gson.

AMP page example

Note that AMP pages load from the Google CDN, which is very fast and opens every page instantly. That is the reason why the URL of the blog post in the right-hand image starts with google.com and not blog.madadipouya.com

However, the work is not all done yet. If the Yoast plugin is used for SEO, a great option is to add one of its extensions, the Yoast SEO AMP glue plugin. This plugin basically makes sure the default WordPress AMP plugin uses the proper Yoast SEO metadata and allows modification of the AMP page design. Hence, it is highly recommended to have the Yoast glue plugin.