I am delighted to have received a President’s Award for input on data science from outgoing Institute and Faculty of Actuaries President Marjorie Ngwenya, FIA at yesterday’s AGM at Staple Inn in London.
The IFoA is a tremendously vibrant organisation, and I believe the IFoA and other actuaries have an important role to play in helping businesses and organisations make the most of the torrents of data becoming available, whilst also helping protect consumers from unethical use of such data. In particular, I am very pleased that the IFoA is collaborating with the Royal Statistical Society in the vital area of the ethical use of data in data science. (For example, a joint event on the Industrialisation and Professionalisation of Data Science was held earlier this month.)
While looking at a (relatively small, at 1.7 million records) big data example of New York Yellow Cab taxi trips, I am coming to the conclusion that, if (as we do) you are using Microsoft tools, the best place for initial analysis, including the all-important first step of finding outliers/errors, is Azure Machine Learning Studio (Azure ML), as opposed to Excel, Power BI or bespoke analysis using e.g. Kendo UI.
Why Azure ML for initial analysis?
- It loads data quite quickly (e.g. just over a minute to import almost 2 million records from an Azure SQL database). This is currently much quicker than Power BI.
- It automatically produces histograms and box plots of numeric fields (see the images below and above, where the field FareAmount has been selected). We can tell immediately from the box plot that there are several outliers, and in fact probable errors that will need to be either corrected or removed, since FareAmount should not have negative values!
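For readers without Azure ML to hand, the outlier rule behind a standard box plot (points beyond 1.5 × IQR from the quartiles) can be reproduced in a few lines of Python with pandas. This is only a sketch: the column name FareAmount matches the field above, but the fare values and the split between "outlier" and "error" are illustrative assumptions, not the real taxi data.

```python
import pandas as pd

# Made-up fares standing in for the real FareAmount field; in practice
# this series would come from the Azure SQL database mentioned above.
fares = pd.Series([5.0, 7.5, 9.0, 12.0, 6.5, 52.0, -4.5, 8.0], name="FareAmount")

# The standard box-plot rule: points more than 1.5 * IQR beyond the
# quartiles are drawn as outliers, which is what Azure ML shows visually.
q1, q3 = fares.quantile(0.25), fares.quantile(0.75)
iqr = q3 - q1
outliers = fares[(fares < q1 - 1.5 * iqr) | (fares > q3 + 1.5 * iqr)]

# Negative fares are not just statistical outliers but outright errors.
errors = fares[fares < 0]
print(outliers.tolist())
print(errors.tolist())
```

With this toy data, both the $52.00 fare and the negative fare fall outside the box-plot fences, but only the negative one is a definite error.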
“Azure Machine Learning Studio: the best place for initial data analysis?”
The above screenshot shows an initial analysis (in Microsoft Power BI) of 1,723,099 New York taxi trip records uploaded to the cloud. The top chart is a scatter plot of Trip Distance (in miles) against Total Fare Amount (in US $). This useful chart shows straightaway that there are some outliers in the data (e.g. some trips cost over $1,000 despite covering only short distances). These records are almost certainly errors (where, for example, the fare was entered with the decimal point in the wrong place, e.g. $1000.00 instead of $10.00) and should be corrected or removed. Similar errors in the Trip Distance field had already been removed: 2 records had implausible distance values (e.g. 300,833 miles for a total fare of $14.16, and 1,666 miles for a total fare of $10.30).
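The cleaning step described above can be sketched as a simple plausibility filter in pandas. The column names, the tiny made-up DataFrame, and the cut-off values below are all assumptions for illustration; sensible real thresholds would need domain judgement (and the corrected-decimal-point records would ideally be repaired rather than dropped).

```python
import pandas as pd

# Made-up rows mimicking the errors described above (column names assumed).
trips = pd.DataFrame({
    "TripDistanceMiles": [2.1, 300833.0, 1666.0, 0.8],
    "TotalFareUSD": [12.50, 14.16, 10.30, 1000.00],
})

# Crude plausibility limits, generous for a New York cab trip.
MAX_DISTANCE = 200.0   # miles
MAX_FARE = 500.0       # US dollars

# Keep only rows inside both limits; the rest are flagged as likely errors.
plausible = trips[(trips["TripDistanceMiles"] <= MAX_DISTANCE)
                  & (trips["TotalFareUSD"] <= MAX_FARE)]
print(len(plausible))  # only the first row survives
```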
In order to analyse big data, it often needs to be moved from its original sources (e.g. separate csv or txt files, or a stream) to somewhere it can be collated and processed: for example an online database, Microsoft Power BI, or an xdf (eXternal Data Frame) file that can be analysed by Microsoft R Server.
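One common pattern for this kind of move is loading the source files into a database in chunks, so memory use stays bounded however large the files are. The sketch below uses an in-memory SQLite database purely as a stand-in for an Azure SQL database, and a tiny in-memory CSV as a stand-in for the large trip files; column names are assumed.

```python
import io
import sqlite3

import pandas as pd

# A stand-in CSV source; in practice this would be the large trip files.
csv_data = io.StringIO(
    "TripDistanceMiles,TotalFareUSD\n"
    "2.1,12.5\n"
    "0.8,6.0\n"
    "3.4,15.2\n"
)

# sqlite3 stands in here for an online database such as Azure SQL.
conn = sqlite3.connect(":memory:")

# Read and append in chunks, so only one chunk is ever held in memory.
for chunk in pd.read_csv(csv_data, chunksize=2):
    chunk.to_sql("trips", conn, if_exists="append", index=False)

count = conn.execute("SELECT COUNT(*) FROM trips").fetchone()[0]
print(count)  # 3
```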
“Uploading Big Data: it’s very different from normal data”
As a “proper” programmer, used to heavy-duty, compiled languages like C# (and before that C++ and C), my reaction on discovering during my Data Science journey that R and Python are heavily used by data scientists was: why?
Why would anyone use an interpreted language, which is therefore bound to be slower, and why would anyone go to the trouble of using yet another language when there are perfectly good compiled languages around like C#, F# and VB.net?
The answer seems to be partly that R and Python are free and open source, and partly that they have excellent visualisation tools, which the other languages currently lack.
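As a rough illustration of that visualisation strength, a labelled scatter plot like the Power BI one above takes only a handful of lines in Python with matplotlib. The data here is made up, and the non-interactive Agg backend is used only so the sketch runs without a display.

```python
import io

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Toy distance/fare pairs standing in for the taxi records.
distance = [1.2, 2.5, 3.1, 0.8, 4.0]
fare = [6.5, 10.0, 12.5, 5.0, 15.5]

# A labelled scatter plot in four lines.
fig, ax = plt.subplots()
ax.scatter(distance, fare)
ax.set_xlabel("Trip Distance (miles)")
ax.set_ylabel("Total Fare (US $)")

# Render to an in-memory PNG rather than a file.
buf = io.BytesIO()
fig.savefig(buf, format="png")
```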
“Why do data scientists use R and Python, as opposed to other languages like C#?”
I mentioned a couple of days ago (here) that I had completed the 10 courses required for the Microsoft Professional Program for Data Science. I was delighted to receive confirmation from Microsoft earlier today in the form of a nice certificate (see pic above), or you can view it here.