Not really. We're talking about consuming SSRS reports vs using the visualisation tools. No one's suggesting handing R or Python to the people who use the reports/visualisation tools.

When you say things like, "SSRS will be dead and GGPlot and other tools will take over," you are saying that both are competing to solve the same problem and that one will replace the other. This is simply not true in the slightest.

Thanks Gail. I agree with you, and will definitely follow this advice. The reason I bring up this point is that I am joining a new organization next month, where I will get a chance to learn C#. But I was also excited about R/Python. I have taught T-SQL skills to several people over the last few months. Most of them had no knowledge of SQL before they started their one-to-one classes with me, and they were also learning Python/R in parallel to get jobs as Data Scientists. So I was under the impression that this area of learning is expanding. Is it really possible to learn SQL for 2-3 months and get a job as a Data Scientist with good compensation? Should I also start learning these languages? I also notice that around three of the Question of the Day questions are related to the R language.

We are doing R questions once a week, as they seem to be popular and they help me learn as well.

Interest in R and Python, and really in Data Science and Machine Learning generally, is expanding because there is a lot of publishing and writing about how companies are starting to use these technologies more. Whether this is useful and whether it will continue remains to be seen, but if you have an interest, learn them. Learning is good and keeps your brain fresh. It's a skill that you need.

I agree with Jeff that becoming better at SQL is important. Much of the ML/AI stuff still involves moving data around, cleaning it, etc., and SQL does a great job of that, but if you want variety, and your company uses other languages, splitting your learning is fine.

There is no "should." You must decide what you are interested in and what presents opportunities for you.

Proper name for a book about all the data scientist tools: "Blunt Tools for Dummies".

If you think about it, more than 90% of the operations they do are a waste of time and resources, because they re-run exactly the same analysis operations on the same sets of data, except for a small portion of newly added data, which usually does not exceed a few percent of the whole amount of analysed data.

A smart solution would analyse only the newly arrived data and merge its aggregations with the ones collected previously.
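The incremental approach described above can be sketched with mergeable aggregates. This is only an illustration with hypothetical function names, assuming the statistics of interest (count, sum, min, max) can be combined from partial results:

```python
def aggregate(batch):
    """Compute mergeable summary statistics for one batch of values."""
    return {
        "count": len(batch),
        "sum": sum(batch),
        "min": min(batch),
        "max": max(batch),
    }

def merge(total, new):
    """Fold a new batch's aggregates into the previously stored totals,
    without re-scanning the historical data."""
    return {
        "count": total["count"] + new["count"],
        "sum": total["sum"] + new["sum"],
        "min": min(total["min"], new["min"]),
        "max": max(total["max"], new["max"]),
    }

# Scan the full history once; afterwards, process only the small new slice.
history = aggregate([10, 20, 30, 40])
today = aggregate([50, 60])
combined = merge(history, today)

print(combined["count"])                     # 6
print(combined["sum"] / combined["count"])   # mean = 35.0
```

Averages fall out of the stored sum and count; note that statistics like exact medians or distinct counts do not merge this way and need approximate sketches instead.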

But that would make the big data look not so big, and that must be the problem. With all the petabytes gone, what do you have to brag about?