Machine Learning is a sub-field of computer science and an area of intense current research. It has nothing to do with Python (a programming language) except that some machine learning algorithms might be implemented in Python.

"An Introduction to Statistical Learning" by Trevor Hastie et al. [1]. They also have a free online class through Stanford [2]; sign in to their system and you can take the archived version for free.

ISL is an excellent, free book that introduces you to ML. You can go deeper later, but to me this is where I wish I'd started. I am taking the Data Science track at Coursera (on Practical Machine Learning now) and I am kicking myself that I didn't start with ISL instead.

Now, I know you specifically asked about Python, but the concepts are bigger than the implementation. All of these techniques are available in Python's ML stack: scikit-learn, NumPy, pandas, etc. I don't know of an equivalent of ISL for Python, but if you learn the concepts and you're a programmer of any worth, you will be able to move from R to Python. Maybe take/read ISL but do the labs in Python; that might be a fun way to go.
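To give a feel for what "doing the labs in Python" looks like, here's a minimal sketch of an ISL-style exercise redone with scikit-learn: fitting a simple linear regression. The toy data is made up here, not taken from ISL.

```python
# Fit a simple linear regression, the first model ISL covers, in Python.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))                    # one predictor
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, size=50)   # linear trend + noise

model = LinearRegression().fit(X, y)
# The fitted slope and intercept land close to the true 3.0 and 2.0.
print(model.coef_[0], model.intercept_)
```

The same pattern (build the design matrix, call `.fit()`, inspect coefficients) carries over to nearly every estimator in scikit-learn.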

Lastly, to go along with ISL, "The Elements of Statistical Learning", also by Hastie et al., is available for free if you want to dive deeper [3].

I also think this is one of the best entry-level books, and the Stanford course looks good. This is what I recommend to people. In some ways, R is a very good match for this material, and you could move to Python later.

Was there a reason Ng decided to teach the course in Octave rather than Python? The only time I've ever used .m files was during the course.

I appreciated the challenge of thinking in an array-based language, but I felt it held me back from directly comparing my solutions to the tutorials against external sources. (Unless that in and of itself was the reason.)

It's how some education works. But instruction has to be relevant to the skills and abilities of those receiving it, and has to address those things on which they have a deficit of understanding. If your approach to education is simply to stand at the front of the room and read a book to them, then you're not a very good educator.

It's a bit more complex than simply: give someone step-by-step instructions and they have the skill and/or knowledge. If they're still at the level where you have to explain everything, then that's a fairly low level of skill - and it's not clear that instruction on all subjects is appropriate to that level of skill... just as you wouldn't go up to an aeronautics engineer, ask for a step by step instruction guide on how to build a 747, and then call yourself an aeronautics engineer.

You might, if you had a very good memory, be able to build a 747 (waving aside the logistical difficulties of doing so) but you wouldn't understand why it worked, and you wouldn't know how to build any other aircraft. And if, instead, you asked her for a step-by-step guide on how to learn what she knew in a couple of months... Well, the answer would likely be that she knew a lot more than could be communicated in two months, and that you needed a higher level of understanding of physics and so on to ask a more refined question to which she would be able to give some sort of meaningful answer.

OP clearly has not even googled this. "python machine learning" pulls up many easily-accessible articles meant for beginners with no background in machine learning. The scikit-learn website is chock full of tutorials meant for beginners, with code examples!

How is someone with this little motivation going to learn something so complex? I want to allocate my time helping people who at least try first.

I don't think Hacker News is a bad place to ask a question like this. The most helpful answers on a popular post are going to eliminate a lot of the low-quality content that you're going to come across with Google.

I've seen the phrase "please provide" a lot. And I agree with you: I've never seen it actually used politely; it always comes off like a strong demand. Maybe part of the problem is that it is a stock phrase on exams.

I suspect that a lot of them might not be native English speakers (e.g. OP's username suggests that he is from India, where a lot of people speak English, but it is relatively few people's native tongue).

The main thing to understand though is that machine learning is a big topic, and you aren't going to be able to become an expert in two months.

Narrow down to a specific area, or type of problem, and focus on learning techniques and tools for that.

My guess is that there's something you're working on, or want to work on, which is why you want to learn. If that's the case, I'd recommend that you read up a bit to give yourself a good understanding of the different kinds of problems out there (classification, prediction, anomaly detection, etc.) and the different classes of tools available, and then pick a simple, similar real-world problem to try to tackle.

The best way to really learn is to get hands-on with a project and suffer through it after you've read up a bit to understand the basics. Then, when you hit something you can't wrap your head around, search and read articles (or talk to someone with experience and expertise) until it clicks and you can keep working through it.

By the end you'll have a good grasp of at least one technique, and be in a great place to keep learning more.

I don't have any relationship with Kaggle other than being a semi-active user, but I really dig what they've got going. For a step-by-step approach, start with their blog posts and work on their "Getting Started" competitions. Everything you need is there.

I created a GitHub repo (https://github.com/apeeyush/machine-learning) to store and organize the code I used in Kaggle contests (mainly knowledge contests). Recently I have participated in some vision and CTR prediction contests as well, but I haven't updated the repo with those since the code is still very hacky. I would really appreciate any contributions from the community.

It doesn't teach you ML with Python, but it is extremely important to learn the ML concepts without any programming language in mind. In addition to that course, any Google search will help you a lot; there are a lot of good explanations of ML concepts on various websites. If you don't understand how the algorithms work, you will end up copying and pasting example code without knowing what you're doing. You need to picture what you want to do in your head before you type a single line.

[2] -- Once you have the initial introduction, you can use Python to implement ML concepts. Fortunately, Python has a very easy-to-learn ML package: scikit-learn (http://scikit-learn.org). It's free and is used by companies such as Spotify and Evernote. Scikit-learn has great documentation and many examples that will make the whole learning process exciting.
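A complete first scikit-learn script is only a few lines; here's a hedged sketch using one of the bundled toy datasets (iris) and a k-nearest-neighbors classifier, just to show the basic fit/score workflow:

```python
# Train a classifier on a built-in dataset and measure held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # fraction of test samples classified correctly
```

Swapping `KNeighborsClassifier` for any other scikit-learn estimator keeps the rest of the script unchanged, which is what makes the library so pleasant to learn with.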

[3] -- After you feel comfortable with ML in Python, if you don't have datasets of your own, you can find a lot of datasets on UC Irvine's machine learning repository: http://archive.ics.uci.edu/ml/

The more you practice, the more comfortable you will feel playing with data. To cover an ML technique really well, play with every single parameter of that technique's scikit-learn functions on the same dataset. Also, always try to visualize the data (scikit-learn has matplotlib examples to learn from) so you can actually see how the results change when the function's parameters change. This will make everything a lot easier.
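For example, a parameter sweep on a single dataset might look like this; the estimator (k-nearest neighbors) and dataset (iris) are arbitrary choices for illustration:

```python
# Sweep one parameter of one estimator on one fixed dataset and
# watch how cross-validated accuracy responds.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

scores = {}
for k in (1, 5, 15, 45):
    scores[k] = cross_val_score(
        KNeighborsClassifier(n_neighbors=k), X, y, cv=5
    ).mean()
    print(k, round(scores[k], 3))
```

Plotting `scores` with matplotlib (keys on the x-axis, values on the y-axis) is exactly the kind of visualization that makes a parameter's effect click.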

I just started going down this path. I began by using audio analysis to do some machine learning (detecting a specific audio pattern that is very easily recognizable to humans). I can't get too specific about it, as it's under NDA, but I had a little under two weeks to build a prototype that would prove or disprove that it was possible.

The very first thing I did was take a step back and understand the domain of the data I was working with, and what the best way to present it for machine learning would be. In my case, I had to understand what the best format for presenting my audio would be (slightly modified MFCCs), and what the best library would be to get my data in that format.

Next, I needed to build a proper training data set. This meant I had to manually build a (largish) data set that matched exactly what I was looking for. So I went and downloaded a bunch of example audio and then manually went through it, tagging it into the two bins I was looking to differentiate between.

Once I had this (which actually took much more time than the learning itself), I was ready to do the actual machine learning. I used Theano, and figuring out how to translate my dataset into a format digestible by Theano took another chunk of time. Once my data was in the proper format, it came down to basically playing with how I presented my initial data to Theano and then tweaking my gradient.

Finally, I was able to train and get a net that was about 80% right with my hypothesis. There were a few edge cases I hadn't anticipated that wouldn't necessarily work well, but it gave us enough confidence to go through with more machine learning for our project.
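Stripped of the Theano and MFCC specifics (none of which are shown in the comment), the shape of that pipeline can be sketched in plain NumPy: two bins of tagged feature vectors, a simple model, and a gradient loop. The random features below are stand-ins for real MFCC frames; this is an illustrative sketch, not the actual project code.

```python
# Binary classification of "tagged" feature vectors via logistic
# regression trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
# Fake dataset: 200 frames of 13 MFCC-like coefficients per bin.
pos = rng.normal(+1.0, 1.0, size=(200, 13))
neg = rng.normal(-1.0, 1.0, size=(200, 13))
X = np.vstack([pos, neg])
y = np.concatenate([np.ones(200), np.zeros(200)])

w = np.zeros(13)
b = 0.0
for _ in range(200):
    z = np.clip(X @ w + b, -30, 30)       # logits, clipped for stability
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient step on weights
    b -= 0.1 * np.mean(p - y)             # gradient step on bias

z = np.clip(X @ w + b, -30, 30)
acc = np.mean((1.0 / (1.0 + np.exp(-z)) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The real work, as the comment says, is in producing the tagged `X` and `y` from raw audio; the training loop itself is the easy part.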

So, takeaway suggestions: find a real project, something you want to learn, and then just do it. Gather knowledge of your data, build a dataset, and test a hypothesis. Most of this isn't machine learning; it's mostly just moving and shaping data and knowing what in your data is significant. The machine learning algorithms are really just a tiny piece of the whole picture. Good luck.

Machine learning is a pretty big field. The Coursera course is very good. It uses Octave, not Python, but what you learn will be easy to transfer. It is mostly focused on neural networks. If you don't already know linear algebra, you should probably learn that first.
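To see why linear algebra is the prerequisite: one layer of a neural network is nothing more than a matrix-vector product plus a nonlinearity. A tiny sketch (random weights, ReLU chosen arbitrarily as the nonlinearity):

```python
# A single neural-network layer: h = nonlinearity(W @ x + b).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)        # 3 input features
W = rng.normal(size=(4, 3))   # weight matrix for a 4-unit layer
b = np.zeros(4)               # bias vector

h = np.maximum(0, W @ x + b)  # the entire layer is one matrix multiply
print(h.shape)                # (4,)
```

Everything in the course's neural-network assignments (forward propagation, backpropagation) is compositions and derivatives of expressions like this, which is why the matrix operations need to feel natural first.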

I guess my point is that it's such a broad overview of all topics that fall under artificial intelligence that you don't get much of a good introduction to applying machine learning. But point taken, you're right, it is an introductory text.