While looking at a (relatively small, at 1.7 million records) big data example of New York Yellow Cab taxi trips, I have come to the conclusion that, if like us you are using Microsoft tools, the best place for initial analysis, including the all-important first step of finding outliers and errors, is Azure Machine Learning Studio (Azure ML), rather than Excel, Power BI or bespoke analysis using e.g. Kendo UI.
Why Azure ML for initial analysis?
- It loads data quite quickly (e.g. just over a minute to import almost 2 million records from an Azure SQL database). This is currently much quicker than Power BI.
- It automatically produces histograms and box plots of numeric fields (see the images above and below, where the field FareAmount has been selected). The box plot shows immediately that there are several outliers, and in fact probable errors that will need to be either corrected or removed, since FareAmount should never be negative!
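The outliers Azure ML's box plots highlight can also be found programmatically. Below is a minimal sketch of the standard 1.5 × IQR box-plot rule, which is what a box plot's whiskers visualise; the fare values are illustrative, not taken from the taxi data set.

```python
# Sketch: flagging outliers in a fare column using the 1.5 * IQR box-plot
# rule (the rule a box plot's whiskers are based on). Values are illustrative.
fares = [4.5, 7.0, 9.5, 12.0, 6.5, 8.0, -3.0, 52.0, 10.5, 5.0]

fares_sorted = sorted(fares)

def quartile(data, q):
    # Linear-interpolation quantile on a sorted list (illustrative helper).
    pos = (len(data) - 1) * q
    lo, hi = int(pos), min(int(pos) + 1, len(data) - 1)
    return data[lo] + (data[hi] - data[lo]) * (pos - lo)

q1 = quartile(fares_sorted, 0.25)
q3 = quartile(fares_sorted, 0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Anything outside the whiskers is a candidate outlier/error.
outliers = [f for f in fares if f < lower or f > upper]
print(outliers)  # [-3.0, 52.0]
```

Note that the rule only flags candidates; whether a flagged value (like the negative fare here) is a genuine error still needs a human judgement.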
The above screenshot shows an initial analysis (in Microsoft Power BI) of 1,723,099 New York taxi trip records uploaded to the cloud. The top chart is a scatter plot of Trip Distance (in miles) against Total Fare Amount (in US $). This useful chart shows straight away that there are some outliers in the data: for example, some trips cost over $1,000 despite covering only short distances. These records are almost certainly errors (e.g. the fare was entered with the decimal point in the wrong place, such as $1000.00 instead of $10.00) and should be corrected or removed. Similar errors in the Trip Distance field had already been removed: two records had implausible distance values (300,833 miles for a total fare of $14.16, and 1,666 miles for a total fare of $10.30).
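A validation pass of the kind described above can be sketched as follows. The field names (trip_distance, total_amount) and the plausibility thresholds are my own illustrative assumptions, not values from the actual data set.

```python
# Sketch: removing records with negative fares, implausible distances,
# or an implausible fare-per-mile ratio (e.g. a misplaced decimal point).
# Field names and thresholds are illustrative assumptions.
trips = [
    {"trip_distance": 2.1, "total_amount": 9.50},
    {"trip_distance": 300833.0, "total_amount": 14.16},  # implausible distance
    {"trip_distance": 1.3, "total_amount": -4.00},       # negative fare
    {"trip_distance": 0.8, "total_amount": 1000.00},     # likely misplaced decimal
]

MAX_PLAUSIBLE_MILES = 200    # assumed ceiling for a single NYC taxi trip
MAX_FARE_PER_MILE = 100      # assumed ceiling for dollars per mile

def is_valid(trip):
    # Distance is checked first, so the per-mile division is always safe.
    return (trip["total_amount"] >= 0
            and 0 < trip["trip_distance"] <= MAX_PLAUSIBLE_MILES
            and trip["total_amount"] / trip["trip_distance"] <= MAX_FARE_PER_MILE)

clean = [t for t in trips if is_valid(t)]
suspect = [t for t in trips if not is_valid(t)]
print(len(clean), len(suspect))  # 1 3
```

In practice the suspect records would be reviewed (and corrected where possible) rather than silently dropped.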
In order to analyse big data, you often need to move it from its original sources (e.g. separate csv or txt files, or a stream) to somewhere it can be collated and processed (e.g. an online database, Microsoft Power BI, or an xdf (eXternal Data Frame) file that can be analysed by Microsoft R Server).
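As a small illustration of that collation step, the sketch below loads several CSV sources into a single queryable database, using an in-memory SQLite database to stand in for something like Azure SQL. The column names and data are illustrative.

```python
import csv
import io
import sqlite3

# Sketch: collating several CSV sources into one queryable database
# (SQLite in memory here, standing in for an online database such as
# Azure SQL). Column names and values are illustrative.
sources = [
    "trip_distance,total_amount\n2.1,9.50\n1.3,6.00\n",
    "trip_distance,total_amount\n0.8,5.50\n",
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (trip_distance REAL, total_amount REAL)")

for text in sources:
    rows = csv.DictReader(io.StringIO(text))
    conn.executemany(
        "INSERT INTO trips VALUES (?, ?)",
        [(float(r["trip_distance"]), float(r["total_amount"])) for r in rows],
    )

count, = conn.execute("SELECT COUNT(*) FROM trips").fetchone()
print(count)  # 3
```

Once the data is in one place like this, the outlier checks and charts described above can run against a single consistent table.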
Building a corporate dashboard so that you have key management information at a glance
(This article was first posted on 17 June 2017 on a different blog site, but migrated here 05 Feb 2018).
I have recently been building some corporate dashboards (as recommended by Daniel Priestley in his best-selling book “24 Assets: Create a digital, scalable, valuable and fun business that will thrive in a fast changing world”). From chapter 15 of the book:
A key asset is a dashboard that allows the team to see how the business is performing. Carefully select some of the metrics that drive performance and make sure they show up prominently on your dashboard. You might select metrics like cash at bank, payments collected, expected invoices, revenue per employee or monthly users; the general rule is that whatever you measure will improve.
Accessing your valuable and key data
Dashboards need data, and this data will almost certainly need to come from a variety of sources in your organisation. There are lots of different ways of exposing your data sources so that the key information can be pulled into your dashboard. I reviewed several options (including direct connections to databases, WebApi or MVC endpoints from websites, and OData) and concluded that OData currently seems the best approach. Your data is valuable, so whichever method you use must be secure (i.e. with access protected via encryption and passwords); OData supports this, as do the other methods I have mentioned. (Contact us if you need help with this.)
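To make the OData approach concrete, here is a minimal sketch of building an authenticated OData query for a dashboard metric. The endpoint URL, entity set, field names and credentials are all hypothetical; the point is that the query travels over HTTPS with credentials in an Authorization header, so access is both encrypted and password-protected.

```python
import base64
import urllib.parse
import urllib.request

# Sketch: an authenticated OData query for a dashboard metric.
# The endpoint, entity set, fields and credentials are hypothetical.
base_url = "https://example.com/odata/Invoices"
params = {
    "$filter": "Status eq 'Expected'",
    "$select": "InvoiceNumber,Amount",
    "$orderby": "DueDate",
}
url = base_url + "?" + urllib.parse.urlencode(params)

request = urllib.request.Request(url)
# HTTPS plus basic authentication: the credentials travel encrypted over TLS.
token = base64.b64encode(b"dashboard_user:secret").decode("ascii")
request.add_header("Authorization", "Basic " + token)

# urllib.request.urlopen(request) would return the JSON feed; it is not
# called here because the endpoint is hypothetical.
print(url)
```

The same pattern works from any language your dashboard is built in, since OData is just HTTP plus a query convention.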