I am delighted to have received a President’s Award for input on data science from outgoing Institute and Faculty of Actuaries President Marjorie Ngwenya, FIA at yesterday’s AGM at Staple Inn in London.
The IFoA is a tremendously vibrant organisation, and I believe IFoA and other actuaries have an important role to play in helping businesses and organisations make the most of the torrents of data becoming available, whilst also helping protect consumers from unethical use of such data. In particular, I am very pleased that the IFoA is collaborating with the Royal Statistical Society in the vital area of the ethical use of data in data science. (For example, a joint event was held earlier this month on the Industrialisation and Professionalisation of Data Science.)
#The2040Economy: in my opinion, every business, charity, school, government department, etc. needs to harness appropriate data for its organisation as soon as possible and extract value from it, or it will be overtaken by competitors that do.
And we (society across the whole planet) need to prepare for a new economy in which very few people work.
Think about it: how will your children (or grandchildren) cope in a world where the economy doesn’t need them to work? How will society as a whole finance support for them?
While looking at a (relatively small, at 1.7 million records) big data example of New York Yellow Cab taxi trips, I am coming to the conclusion that, if like us you are using Microsoft tools, the best place for initial analysis, including the all-important first step of finding outliers and errors, is Azure Machine Learning Studio (Azure ML), rather than Excel, Power BI or a bespoke analysis built with e.g. Kendo UI.
Why Azure ML for initial analysis?
- It loads data quite quickly (e.g. just over a minute to import almost 2 million records from an Azure SQL database). This is currently much quicker than Power BI.
- It automatically produces histograms and box plots of numeric fields (see the images above and below, where the field FareAmount has been selected). We can tell immediately from the box plot that there are several outliers, and in fact probable errors that will need to be either corrected or removed: FareAmount should never be negative!
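The box-plot check above can be reproduced outside Azure ML with a few lines of pandas. This is a minimal sketch using standard Tukey fences (1.5 × IQR beyond the quartiles, the convention most box plots draw); the sample fares are hypothetical stand-ins for the real FareAmount column:

```python
import pandas as pd

# Hypothetical sample of taxi fares; the real data would come from the
# Azure SQL import described above.
trips = pd.DataFrame({"FareAmount": [5.0, 7.5, 12.0, -4.5, 9.0, 1000.0, 6.5]})

# Tukey fences: points beyond 1.5 * IQR from the quartiles are outliers.
q1, q3 = trips["FareAmount"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = trips[(trips["FareAmount"] < lower) | (trips["FareAmount"] > upper)]
errors = trips[trips["FareAmount"] < 0]  # fares should never be negative

print(outliers)  # the -4.5 and 1000.0 rows
print(errors)    # the -4.5 row: a definite error, not just an outlier
```

The distinction in the last two lines matters in practice: an unusually large fare might be genuine, but a negative fare is definitely a data error.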
The above screenshot shows an initial analysis (in Microsoft Power BI) of 1,723,099 New York taxi trip records uploaded to the cloud. The top chart is a scatter plot of Trip Distance (in miles) against Total Fare Amount (in US $). This useful chart shows straight away that there are some outliers in the data (e.g. some trips cost over $1,000 despite covering only short distances). These records are almost certainly errors, for example a fare entered with the decimal point in the wrong place ($1,000.00 instead of $10.00), and should be corrected or removed. Similar errors in the Trip Distance field had already been removed: two records had implausible distance values (300,833 miles for a total fare of $14.16, and 1,666 miles for a total fare of $10.30).
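The two cleaning steps described above (dropping implausible distances, then flagging likely decimal-point slips) can be sketched as follows. The threshold values and sample records are assumptions for illustration, not the actual rules applied to the full dataset:

```python
import pandas as pd

# Illustrative records mirroring the errors described above
# (the 300,833-mile and 1,666-mile rows, plus two normal-looking trips).
trips = pd.DataFrame({
    "TripDistance": [2.1, 300833.0, 1666.0, 3.4],
    "TotalFare":    [9.5, 14.16,    10.30,  1000.00],
})

# Step 1: remove implausible distances. No NYC cab trip covers hundreds
# of miles; 100 miles is an assumed cut-off, not a published rule.
MAX_PLAUSIBLE_MILES = 100
clean = trips[trips["TripDistance"] <= MAX_PLAUSIBLE_MILES]

# Step 2: flag fares that look like decimal-point slips, i.e. a very
# high fare for a short trip (thresholds again assumed).
suspect = clean[(clean["TotalFare"] > 500) & (clean["TripDistance"] < 10)]
print(suspect)  # the $1,000.00 fare on a 3.4-mile trip
```

Whether flagged records are then corrected (e.g. dividing the fare by 100) or simply removed is a judgement call that depends on how confident you are about the cause of the error.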
In order to analyse big data, it often needs to be moved from its original sources (e.g. separate CSV or TXT files, or a stream) to somewhere it can be collated and processed, such as an online database, Microsoft Power BI, or an XDF (external data frame) file that can be analysed by Microsoft R Server.
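As a minimal sketch of that collation step, the following gathers several per-month CSV extracts into a single queryable store. SQLite is used here as a stand-in for an online database such as Azure SQL; the file names and records are hypothetical:

```python
import sqlite3
import tempfile
from pathlib import Path

import pandas as pd

# Create two small CSV files standing in for separate source extracts
# (hypothetical data and file names).
src = Path(tempfile.mkdtemp())
pd.DataFrame({"TripDistance": [2.1, 3.4], "TotalFare": [9.5, 12.0]}) \
    .to_csv(src / "jan.csv", index=False)
pd.DataFrame({"TripDistance": [1.0], "TotalFare": [5.5]}) \
    .to_csv(src / "feb.csv", index=False)

# Collate everything into one table; an online database plays the same
# role in the workflow described above.
db = sqlite3.connect(":memory:")
for csv_path in sorted(src.glob("*.csv")):
    pd.read_csv(csv_path).to_sql("trips", db, if_exists="append", index=False)

total = db.execute("SELECT COUNT(*) FROM trips").fetchone()[0]
print(total)  # 3 rows collated from the two source files
```

For genuinely large inputs you would read each file in chunks (e.g. `pd.read_csv(..., chunksize=...)`) rather than loading it whole, but the shape of the loop is the same.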
I had occasion to look through the recent accounts of a sample of UK charities yesterday and was quite surprised to find that:
- some charities (e.g. Blind Veterans UK and NSPCC) still seem to be exposed to significant financial risk from their defined benefit (DB) pension plans. (See the chart above – the three charities at the end don’t seem to have DB pension plans or have immunised themselves against this risk).
- the cost of raising funds seems to vary quite a lot across the charities I looked at, ranging from almost a quarter (24%) of income (Woodland Trust), to about a fifth of that (5%, Barnardo's). (See chart below).
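For readers who want to reproduce the comparison above from a charity's published accounts, the ratio is simply fundraising cost over total income. The percentages quoted are from the accounts I reviewed; the absolute amounts below are hypothetical round numbers chosen only to illustrate the calculation:

```python
# Hypothetical figures illustrating the cost-of-raising-funds ratio;
# only the resulting percentage corresponds to the source.
total_income = 50_000_000       # assumed total income, GBP
cost_of_raising = 12_000_000    # assumed fundraising cost, GBP

ratio = cost_of_raising / total_income
print(f"{ratio:.0%}")  # 24%, the upper end of the range observed
```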