Source code is what changes the world …

Python is a great programming language and I just love it. It’s easy to learn and it’s widely used in various fields of technology.

Right from the web to backend applications, Python has its place. It is also at the forefront of the machine learning field, taking artificial intelligence and data analytics to a whole new level. Now is the right time to learn Python, so I decided to gather all the free Udemy courses to help you get started.

Once you check these courses out, leave a comment sharing the course you have picked to start with and the reason for it.

If the courses you want aren’t free, don’t worry. We have a sitewide $10 offer for a limited time.

NOTE: As the sale is mostly over, these courses are no longer free. You can still go ahead and purchase them, as they are some of the best Python courses available online.

V) Introduction to Python and Basics for Beginners

Are you an intermediate or advanced Python developer?

The courses above, though free, are mostly aimed at beginners and structured that way. If you are serious about getting better at Python, they will not help much; they will teach you what you already know. So here are some paid courses you can invest in to pick up the best new knowledge in Python programming.

Intermediate course for Python 1:

Intermediate course for Python 2:

Now that you have had a chance to view all the free Python courses, comment on the one you have picked and share your reason for selecting it.

I am planning a series on image processing with Python. Image processing is helpful in machine learning tasks such as computer vision. It can also be used as a pre-processing step in applications like Optical Character Recognition (OCR).

Following are the ideas I have. Please share the ones you think are important or useful in the comments below.

Image processing using scikit-image: This library has a rich set of algorithms that help process images. This would involve tutorials on blurring, smoothing, de-noising, enhancing, histogram and perspective corrections, and many more.

Image processing using OpenCV: This would involve things like background and foreground separation, identifying faces in an image, etc.

Using NumPy with a Java image-processing library called LIRE (Lucene Image Retrieval) for image feature manipulation and representation. Various algorithms that work at the global as well as local levels of an image can be applied to extract image features.

Using PIL for pixel extraction: PIL (or Pillow, its maintained fork) is used for performing various operations on images. RGB value extraction and other features like editing and resizing can be performed using PIL.

ImageMagick: A command-line tool used to convert images from one format to another. ImageMagick also provides APIs to generate logos, resize or scale images, and perform other image manipulations.

Apart from these, if you are interested in learning any other library or specific topics, comment and let us know, and we will include them in our plan for publishing related articles.

Steganography is the process of hiding text or files (images, documents, etc.) within another file, such as an image, an audio clip, or some other text. This technique has been used by many groups of people to send secret messages so that only the intended recipient knows what a file contains. Many different tools can be used to extract the message from such files.

Here is an example of steganography. Person A hides his personal details within an image using a steganography tool. Only he knows that the image has some text hidden within it. Anyone else who gets that image sees only the image and will not even have a clue that it contains data. Person A can then get his data back using the same steganography tool in reverse.

Steganography is different from encryption in the sense that encryption does not hide data; it only converts or transforms it into some other form depending on the algorithm used. Here, we actually hide the data. It’s up to the user whether to encrypt it before hiding it.

I have built a simple tool; you can either improve it or use the idea to build something better. Read further to learn more.

I wanted to try out steganography, just like the other projects I do in my free time. So I started a project to create a steganography tool that would hide messages within other data. This tool is built entirely in Python.
Here I call the data to be hidden the “message” and the data in which the message is hidden the “base data”. The base data I chose is lorem ipsum text.

Lorem ipsum is generally used in the typesetting industry to check layout and font appearance. It is also used to fill templates with placeholder data. So these sentences, though widely used, do not have any meaning. The reason I chose lorem ipsum as my base data is that, configured right, I would be able to send messages within template-like text, so no one would suspect it contains hidden messages. To a layman, it’s just some lorem ipsum data put up to see how a website looks when it gets real data.

When finished, my project could accept data, generate lorem ipsum sentences, and hide the data within them. It also generates a key that helps the user extract the message from the base data. This key is different for each message, and I encrypted the key as well. The tool also gives the option to store the key in the same file as the base data, or the user can note the key down and send it to the recipient by some other means. Without that key, the data cannot be extracted.
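The internals of my tool are more involved, but the core idea (scatter the message through the base text and record where each piece went as the key) can be sketched in a few lines. Everything below, including the function names and the insertion scheme, is a simplified stand-in and not the actual tool:

```python
import random

def hide(message, base, seed=0):
    """Scatter the message's characters through the base text.

    Returns the stego text and a key recording where each character went.
    """
    rng = random.Random(seed)
    chars = list(base)
    key = []
    for ch in message:
        pos = rng.randrange(len(chars) + 1)  # any position, including the end
        chars.insert(pos, ch)
        key.append(pos)
    return "".join(chars), key

def reveal(stego, key):
    """Undo the insertions in reverse (LIFO) order to recover the message."""
    chars = list(stego)
    message = []
    for pos in reversed(key):
        message.append(chars.pop(pos))
    return "".join(reversed(message))

stego, key = hide("may the force be with you",
                  "Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
print(stego)
print(reveal(stego, key))
```

Undoing the insertions in reverse order works because each pop happens on the list in exactly the state it had right after that insertion. Without the key, the scattered characters cannot be distinguished from the base text programmatically.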

Here is an example where I hide the message “may the force be with you” 😛 within the base data.

As you can see, it generates a key, and you have to decide whether to store it manually somewhere or in the file with the base data.
Below is a screenshot showing the base data in which the message is hidden.

So within that paragraph lies the hidden message.

I also created a visualization to demonstrate how the data is stored. I used two colors alternately to differentiate between words: if the letters of the first word are shown in green, the letters of the second word are in blue, and so on, alternating.
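The alternating-color idea can be mimicked in a terminal with ANSI escape codes. This is a rough sketch of the scheme, not the actual visualization code; the color codes and function name are my own:

```python
# ANSI escape codes for terminal colors.
GREEN, BLUE, RESET = "\033[32m", "\033[34m", "\033[0m"

def colorize_words(text):
    """Color even-indexed words green and odd-indexed words blue."""
    pieces = []
    for i, word in enumerate(text.split()):
        color = GREEN if i % 2 == 0 else BLUE
        pieces.append(color + word + RESET)
    return " ".join(pieces)

print(colorize_words("lorem ipsum dolor sit"))
```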

What are WordNet, hyponyms, and synonyms?

WordNet is a large collection of English words and vocabulary that are related to each other and grouped in some way. That is why WordNet is also called a lexical database.

WordNet groups similar nouns, adjectives, and verbs into sets of synonyms called synsets. A synset might belong to another synset. For example, the synsets “brick” and “concrete” belong to the synset “construction materials”, and “brick” also belongs to another synset called “brickwork”. In this example, brick and concrete are called hyponyms of the synset construction materials, while construction materials and brickwork are in turn hypernyms of brick.

You can imagine WordNet as a tree, where synonyms are nodes at the same level and hyponyms are nodes below the current node.

What is NLTK?

Natural Language Toolkit (NLTK) is a Python library for processing human language. Not only does it have various features to help with natural language processing, it also comes with a lot of data and corpora that can be used. WordNet is one such corpus provided by NLTK data.

How to install NLTK and WordNet?

Once NLTK is installed, you can download WordNet using the NLTK data interface.
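Assuming you install NLTK with pip (`pip install nltk`), the WordNet corpus can then be fetched from the Python interpreter (this needs network access):

```python
import nltk

# Downloads the WordNet corpus into the NLTK data directory.
nltk.download("wordnet")
```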

How do you find all the synonyms and hyponyms of a given word?

We can use the downloaded data along with the NLTK API to fetch the synonyms of a given word directly. To fetch all the hyponyms of a word, we have to recursively visit each node and its synonyms in the WordNet hierarchy. Here is a Python script to do that.
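The recursion looks like this. Since the real thing needs the WordNet corpus downloaded, this sketch includes a tiny stand-in class with the same `hyponyms()` method so it runs on its own; with NLTK and the corpus installed, you would instead pass real synsets obtained from `nltk.corpus.wordnet.synsets(word)`:

```python
def all_hyponyms(synset):
    """Recursively collect every synset below the given one in the hierarchy."""
    found = []
    for hypo in synset.hyponyms():
        found.append(hypo)
        found.extend(all_hyponyms(hypo))
    return found

# Minimal stand-in mimicking the parts of an NLTK synset used above.
class ToySynset:
    def __init__(self, name, hyponyms=()):
        self._name, self._hyponyms = name, list(hyponyms)
    def name(self):
        return self._name
    def hyponyms(self):
        return self._hyponyms

# Toy hierarchy matching the brick/concrete example from earlier.
brick = ToySynset("brick.n.01")
concrete = ToySynset("concrete.n.01")
materials = ToySynset("construction_materials.n.01", [brick, concrete])
print([s.name() for s in all_hyponyms(materials)])
```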

Hypernyms are the synsets above a given word. The collection of all hyponyms and hypernyms of a word is also called the word’s ontology. In the following example, the ontology for the word “car” is extracted.

Get all hyponyms by synset ID

Each synset has an ID, which is simply the offset of that particular word in the list of all words. If you know the ID of a synset and want to find the IDs of all its hyponyms instead of their meanings and definitions, you can do this:

This tutorial shows how to quickly set up Python scripts to search Wikipedia right from the terminal. This is ideal for people who want to quickly read up on a topic without having to open a browser, wait for it to load, and search. The terminal does the job more quickly and efficiently.

The way to do this is to use the Wikipedia API: send an HTTP request to the site as a query action in JSON format and get the response back as a JSON object. This can be implemented using the requests module in Python.

The next step is to parse this JSON response. I found that Beautiful Soup can be used to do that, and it was one of the best options available. Once the parsing is complete, we only have to display the data.
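A minimal sketch of the request side using only the standard library: the parameters follow the public MediaWiki `query`/`extracts` action, and the function names are my own. `fetch_summary` needs network access, so only the URL builder is exercised here:

```python
import json
import urllib.parse
import urllib.request

API_URL = "https://en.wikipedia.org/w/api.php"  # public MediaWiki endpoint

def build_query_url(topic):
    """Build a MediaWiki API URL asking for a plain-text intro extract."""
    params = {
        "action": "query",
        "prop": "extracts",
        "exintro": "1",      # only the section before the first heading
        "explaintext": "1",  # plain text instead of HTML
        "titles": topic,
        "format": "json",
    }
    return API_URL + "?" + urllib.parse.urlencode(params)

def fetch_summary(topic):
    """Fetch and return the intro extract (requires network access)."""
    with urllib.request.urlopen(build_query_url(topic)) as resp:
        data = json.load(resp)
    pages = data["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

print(build_query_url("Python (programming language)"))
```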

I found two scripts to do just that. These scripts, however, don’t use the requests module; they use urllib and urllib2.

Advantage:
The advantage of this approach is that only a brief summary of the topic under search is shown, and most of the time that is all we want.

The second advantage is that if the topic has sections, they will be shown.

Disadvantage: The search does not have good pattern matching; if the exact words are not found in an article, nothing is returned. The same happens when a search has multiple matching results.
Sometimes the data returned is either too little or too much.

Procedure:

First, download and save these two Python files:

wikipedia.py: This program forms the URL for the search term to fetch the article from wikipedia.com. Once the URL is formed, we send a request using the urllib library, perform the search, and get the full data from Wikipedia.

wiki2plain.py: This program converts the full document received from the previous program into readable text. The response from the previous program is usually JSON/HTML, so this program parses it to get meaningful data on the topic.

Then create a Python file named wiki.py and paste the following script into it:

An early alarm. A hot cup of coffee. Getting ready. I looked out of the window: no sunrise yet. I had a train to catch at 5:30 am. I was very excited. It was the day my travel would begin. I would go from Bangalore to Mangalore, but not as a straight journey. I had decided to visit several places along the way, to experience and enjoy nature and peace to the fullest.

This trip meant a lot to me. I would be away from traffic, away from noise and pollution, and most importantly, away from all the stress. I am a happy traveler. I adjust to whatever comes along the way, and I don’t get upset or complain because something did not work out as I had planned. After all, this travel was to relax, not to add more stress. My itinerary had a few places along the way that I hadn’t visited before, like the Lion Safari near Shimoga, and some dams, rivers, and other historical places.

Travel is so much easier now, with the internet helping us all along the way. With just a few clicks, I was able to book my train tickets, book a room, and get some information on places I could visit. I boarded the train to Mysore, my first stop. I reached there by 8 am and headed straight to the K.R.S dam. This dam, built across the Kaveri river, is majestic. Being a weekday, the crowd was practically non-existent. I stood at the tip of the dam staring into the vast amount of water stored, water that would sustain agriculture and life in that region. Birds soared high in the air, and I just stood there with the breeze brushing past me. After a while, I walked through the Brindavan Gardens. I watched people fixing the fountains there, probably preparing for the musical fountain show they host on the weekends. One of them told me that it’s a spectacular event to watch. As I walked on, I found it satisfying to watch the canals that ran the length of the garden with the dam hanging in the background.

I saw the Mysore palace on my way to the Chamundi Hills. That palace never ceases to fascinate me with its rich heritage from the kings’ era. How amazing it might have been to stay there and rule a kingdom. I had been to the palace before, so I decided not to go there this time. It was noon by the time I reached the hills. It was very hot, and I wished I had come sooner. A huge statue of the demon Mahishasura and the towering Chamundi temple stood at the peak. It was very cool inside the temple: no air coolers or air conditioners, just remarkable architecture that kept the temple cool. I had an amazing lunch there, the traditional South Indian meal on a banana leaf. I sat down by the temple side for some time and then resumed my journey.

I then went to the railway station, en route to my second stop, Shimoga, the gateway to the Western Ghats. These days, domestic airlines are highly affordable, but I prefer the train, as most train journeys let me witness some great scenery through jungles and hills. It was night by the time I reached Shimoga. I checked into the hotel, thankful that it was exactly how they had described it online. I was up early the following morning, ready to cover all the places I had in mind. I first went for the Lion Safari. It’s where the forest department takes tourists to see wild animals (mostly lions) in the sanctuary. The excitement died down as we found only well-fed lions and elephants sleeping in the shade. I was a little disappointed with that. But a king cobra that suddenly appeared managed to get everyone’s attention, including our guide’s.

I then went to Sagar, a small town near Shimoga known for the Jog Falls. By the time I reached there, it was drizzling and the waterfalls were partly covered by clouds. But the rains had filled up the Sharavathi river, and it roared as it fell into the chasm. I felt so powerless in front of this majestic source of energy. The clouds soon cleared and the view was spectacular. I walked down the steps that took me to the bottom of the falls, and the lower I went, the cooler it got. The staircase goes very close to a small waterfall where people usually sit for some time and let the mist soak them. It’s very relaxing. At the very bottom, the roaring sound of falling water was deafening. A huge cloud of mist had formed, and I got curious about what’s behind the waterfall, but the very thought of drowning got me to stay where I was. This was the best place on my trip so far, and it remained so for the rest of it.

From Jog, I joined a group of people and went to Honnemaradu, a modest settlement on the backwaters of the River Sharavathi. It was very quiet there. We hired a boat that took us to a nearby island. We sat on the boat’s edge and tried our hand at fishing. I wasn’t fortunate enough to catch one, though. It was six in the evening, and I decided to go to Thirthalli and halt there for the night. It’s a small village on the banks of the river Tunga and the last place before I had to climb down the ghats to the coastal region. The following day, I caught the first minibus to Udupi. The journey on the ghats terrifies me. And this was the Agumbe Ghat, one of the steepest roads I’ve been on. I was extremely alert the whole way and kept checking in and out to make sure nothing seemed out of place. Finally, after thirty-five slow, scary minutes, we were out of the ghats and at sea level. People say Agumbe is one of the most beautiful places to see and that the scenery along the way is breathtaking. All that is fine, but what people won’t tell you is that while you keep staring into the world outside, there is a chance the bus falls off the cliff, killing everyone. Nope! No scenery is worth dying for.

Finally, I reached Udupi, the city famous for the Krishna Temple. It had been two beautiful days so far, and I was on the last leg of my travel. Each time I go somewhere, time just flies, and just before I reach my destination, I get this feeling of “This cannot be the end of my travel, I should go more”. I call this my “Last Mile Paradox”.

I visited the Krishna Temple. The priest told me fascinating stories about the temple and gave me a book about its origin. After a meal there, I left. On my way to Mangalore, my final destination, I realized I had to book a room to stay for the night. Thanks to the internet, I went through a list of available rooms and booked the one with very good reviews. Online booking is truly one of the best things to happen to travelers. I stumbled upon the international flights section while booking my hotel, and man, the air fares were very cheap. I thought to myself that the next trip I take should be out of India. As I traveled to the end of my trip, I put my earphones on and soaked in the memories of the past couple of days.

This blog post will be a guide to Python resources, right from where you can start learning this amazing language to finding resources to solve complex problems in the fields of computer vision, big data, natural language processing, etc.

My aim is to refine this post each week to make it better, add more resources and information, and eventually create a path for people to choose from. But it’s going to take time to get there. Till then, I hope this continues to help you.

Please feel free to add your suggestions in the comments.

This post will be forever growing, so come back each week to find more resources:

1) Where to begin learning Python?

There are tons of resources out there, but very few teach you to use this language the right way while showing you the power it has. Here is a list that has resonated well with me:

Head First Python: For absolute beginners who want a taste of all the things Python can do, this is the right book. The advantages of learning from this book are:

It uses a project-based approach, so you can see your progress visually through what you’ve achieved so far.

It dives straight into programming from the first chapter and has exercises between chapters to help make sure you understand the concepts.

It covers various fields like standalone applications, web applications, and mobile applications, so you know the capabilities of Python and where it can be used.

By the end of this book, you will be able to build applications on your own with very little help.

The disadvantage is that it’s not for people who prefer an in-depth, detailed explanation of each concept.

If this is the right book for you, you can purchase it here :

Think Python (How to Think Like a Computer Scientist): This is a free e-book designed for people who like to master the core concepts, syntax, and features available in Python. It focuses on introducing different programming concepts and how they can be effectively implemented. This book goes into each aspect of programming, be it recursion or inheritance, in much more detail than the “Head First Python” book.

There is also a hardcover book (link below); the latest edition has more in-depth explanations and resources and is updated with many more examples and real use-case scenarios.

It also provides a guide to tools and libraries for mathematical computations, along with insights into data analysis.

You can buy the paperback from here :

Free Online Courses (MOOCs): This section contains a few links to online courses for learning Python, conducted by top universities from across the globe. They are structured to make sure you get the best resources available to develop your next big application. People who don’t like reading books can take this approach.

As of today, the term “search engine” is synonymous with Google. It’s the world’s largest and possibly most efficient search engine of this time. I was always fascinated by it, but its sheer complexity made me think of it as something extremely difficult to understand. And yes, it is a complex engine that has taken years and thousands of people to bring to where it is right now. But this fascination of mine led me to discover a way to learn about and use search engines.

Thanks to one of my friends at work, I came across Apache Solr. Solr is an open-source search engine based on Lucene, an open-source text search engine library. Solr is extremely powerful, and there are some amazing implementations of it competing with Google; one such example is DuckDuckGo.

Solr is an enterprise-grade search engine capable of crunching terabytes of data without any problem, provided enough resources are allocated to it. So, I read about it and found that I could use it to learn more about how search engines work. I came across a lot of new topics and others I had misconceptions about: indexing, analyzers, tokenizers, searching with regular expressions, and facet searches are some of the related concepts. I spent some time reading, and soon enough I was able to index a CSV file containing information about my music collection and search for songs, albums, etc., with results generated in milliseconds.
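As a sketch of what that indexing step looks like: Solr's JSON `/update` handler accepts a list of documents, so each CSV row can become one document. The field names and the core name `music` here are from my own setup, not something Solr mandates, and the helper names are mine:

```python
import csv
import io
import json
import urllib.parse

# Default local Solr address; "music" is a hypothetical core name.
SOLR_CORE = "http://localhost:8983/solr/music"

def csv_to_solr_docs(csv_text):
    """Each CSV row becomes one document; the header row supplies field names."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def build_select_url(query, rows=10):
    """Build a /select URL for a simple keyword search against the core."""
    params = {"q": query, "rows": rows, "wt": "json"}
    return SOLR_CORE + "/select?" + urllib.parse.urlencode(params)

docs = csv_to_solr_docs("song,album\nOne,Mystery\nTwo,Mystery")
print(json.dumps(docs))              # body you would POST to {core}/update?commit=true
print(build_select_url("album:Mystery"))
```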

Within no time, I had a search engine server running on my local machine, like a mini search engine for my music collection. This gave me the idea to dive deeper into Solr and learn about all the things I could do with it. So, I decided to create a blog series on search engines to share and document the various things I come across and learn.

I haven’t yet decided the number of posts in this series, but I would like to keep each of them short and rich with information. By the end of it, I aim to build a search engine for my music collection that can search songs based on several criteria and also play a song directly from the search result screen, somewhat like a web-based music player.

If you have any ideas or things you would like to share or contribute, share it in the comments below.

If you are building your web application in Java, you might be using Spring MVC with Hibernate. Both these frameworks are extremely powerful and let you, the developer, focus on building the application itself rather than worry about the many other factors required to manage and maintain the project and its dependencies.

You might have designed a database model that has tables without a primary key. Tables that map multiple tables (a.k.a. mapping tables) usually do not have a primary key. While this seems normal, there are situations when you have to insert or update values in such a table, and you will find it difficult to do so. Why? Because without a key, there is ambiguity as to which row to update or delete. Fortunately, there is an easy way to solve this problem.

This is gonna be a long post, so brace yourselves. I will try my best to make my point, but if you have any queries or suggestions, feel free to let me know in the comments below. So, let’s begin.

The answer to this issue is to use an embedded entity, which provides a key to a persistent entity that has no primary key. These are the two annotations that will be required: @Embeddable and @EmbeddedId.

Consider two persistent entities, Car.java and Color.java. Each has a primary key, and they represent the tables “CAR” and “COLOR” in the “VEHICLE” database/schema.

For these two entities, let there be a mapping entity called CarColor.java. This represents the mapping table between “CAR” and “COLOR”. As it is a mapping table, it does not have a primary key. In a Spring/Hibernate scenario, the entity CarColor.java would look like:

The above example is correct, but as you can see, there is no primary key. So, in scenarios where you want to identify a record uniquely, you will find yourself in a bit of trouble. But there is a way around it. If you auto-generated the entities, this workaround is already implemented; if not, you can do it yourself. Here is what needs to be done:

Add another entity called CarColorId.java. This class is NOT a persistent class, but it can be used to uniquely identify each record of the CarColor table. Here is the new implementation of CarColor.java along with CarColorId.java.

Let’s analyse the new things included in the above two classes. First, you will have observed two new annotations: @Embeddable and @EmbeddedId. @Embeddable marks a class that isn’t itself a persistent entity but holds persistent fields that together form an identity for the entity that embeds it. @EmbeddedId marks the field of that embeddable type that serves as the entity’s ID.

Also, there is a method in CarColorId called hashCode(). hashCode() returns an integer value for a given object; the hash value of an unchanged object always stays the same, and a carefully written hashCode() gives a unique identity to an object of the class. So, in our example, CarColorId gives a unique identity to our persistent class CarColor.

To perform operations on the CarColor table, one can find and compare CarColor objects by using hashCode() and treating that value as what provides uniqueness to objects of the class.
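The contract at work here (two id objects built from the same car and color values must compare equal and hash identically) is the same one Python expresses with __eq__ and __hash__. A frozen dataclass generates both automatically, which makes the mechanism easy to see in a few lines; this is an analogy to the Java version, not Hibernate code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CarColorId:
    """Composite identifier: equality and hashing derive from both fields."""
    car_id: int
    color_id: int

a = CarColorId(1, 7)
b = CarColorId(1, 7)
assert a == b and hash(a) == hash(b)  # same fields, same identity

# Usable as a lookup key, just as Hibernate uses the embedded id internally.
rows = {a: "red sedan"}
print(rows[b])
```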

As an example, if you want to update a CarColor object, first fetch the objects matching the criteria of car name and/or car color, then apply hashCode() to check whether each matches the object to be updated. If it does, perform the update operation.

So, this is it from me. Hope that you have found what you have been looking for. Feel free to comment regarding any questions or suggestions.

When a JavaScript client tries to consume data from another application or some resource on a server through a REST API, the server responds with an Access-Control-Allow-Origin response header to tell the client which origins may access the content. An origin is any client that sends a request to the server to fetch a resource, and the server can specify exactly which origins are allowed. By default, cross-origin clients are not allowed to fetch the resource.

The Access-Control-Allow-Origin header is part of Cross-Origin Resource Sharing (CORS), and a CORS filter must be implemented for the server to send it when building RESTful web services. The way this works is: when a client requests a resource, it sends the Origin header with the request. The server validates this origin and decides whether to allow the request. If it allows it, it responds with the Access-Control-Allow-Origin header; upon receiving this, the browser matches the origin against it. If the origin matches, the browser allows the request to complete; otherwise, it throws an error.
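The server-side decision described above boils down to a small function. Here is a language-neutral sketch in Python (the function name and shape are mine, not any framework's API): it returns the response header to attach, or nothing when the origin is not allowed:

```python
def cors_headers(request_origin, allowed_origins):
    """Pick the Access-Control-Allow-Origin header for a request, if any."""
    if "*" in allowed_origins:
        # Wildcard: any origin may read the response.
        return {"Access-Control-Allow-Origin": "*"}
    if request_origin in allowed_origins:
        # Echo back the specific origin that was validated.
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # no header: the browser will block the response

print(cors_headers("https://app.example.com", {"https://app.example.com"}))
```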

Here is an example of a GET request made to a REST service and the corresponding response given by the server. Here, the Origin matches the one mentioned by the server.

Thus, the request is allowed in the above case. If your server should allow requests from all origins, then you can set: Access-Control-Allow-Origin: *

Here the “*” indicates that all origins are allowed to complete their requests.

If you are building a REST service in Spring, you can create a simple or complex CORS filter, which helps your server respond to requests accordingly. The following is a simple program from the official Spring documentation that allows all origins to access a resource on your server.

This program allows any origin to send GET, POST, OPTIONS, and DELETE requests and serves them accordingly. The Access-Control-Max-Age header lets the browser cache the preflight response for one hour (3600 seconds).

Without the CORS filter, any client, be it a web front end built using AngularJS or a simple JavaScript client, will not be able to fetch the data, and you might get an error thrown by the browser.