In the summer of 2020 I attended Machine Learning for Decision Making, a ten-week immersive online executive program run by Imperial College Business School – a course targeted at data/tech professionals from all industries wanting to learn the tools and techniques for applying machine learning to business scenarios.

With courses of this kind now available from a number of top academic institutions, I thought it would be worth sharing my personal experience for the benefit of others who may be considering this kind of professional development.

Overall, I found the course to be rich in content, well structured, more difficult than I expected and requiring up to 6 hours per week of dedicated study/work.

The course combined recorded video lectures by a professor with live sessions run by the Learning Facilitators. The former set the foundations, covering the theoretical/mathematical angle; the latter added significant value with practical, real-world use cases.

I would recommend the course to people who have a strong background in either probability theory or programming. If you lack knowledge of both, you may find it overwhelming unless you can study full time. Here are some references that work well as pre-course material and as a course companion:

Structure & Content

The course content was organised in 10 sections, one per week, with the last week focused on a final project.

The first part of the course was mainly theoretical and focused on understanding in what contexts machine learning can be used: specifically, how to evaluate whether a setting is deterministic or probabilistic, since machine learning works well only in the latter case.

In the second part of the course, various machine learning algorithms were explained mathematically and, crucially, in terms of how to apply them in practice, using Python, to various business scenarios. A number of supervised and unsupervised learning techniques were covered, such as classification, regression and clustering.
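To give a flavour of what those hands-on exercises looked like, here is a minimal sketch (my own, not the course's notebooks) of a supervised classification model and an unsupervised clustering model built with scikit-learn on a toy dataset:

```python
# Minimal sketch of the kind of supervised/unsupervised workflow covered:
# fit a classifier and a clustering model on a toy dataset with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: classification with a decision tree
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: cluster the same features without using the labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```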

Instructors and Live Sessions

The course was run by a Professor and two Learning Facilitators.

Every week, the Professor released recorded video material on a new topic and the Learning Facilitators would run two live sessions to go over the material and answer questions.

In addition, the Professor ran two live sessions: the first one halfway through the course, recapping the first six weeks, and the second one at the end, comparing the various algorithms discussed.

If you could not attend the live sessions or had any questions, you could open a ticket, and one of the Learning Facilitators would typically respond within 24 hours.

Cohort

It was good to see many people joining the course from all over the world and from different industries (banking, IT, healthcare, education, mining).

We introduced ourselves in a few sentences, and most of the cohort shared the same expectation of the course: learning how to apply machine learning to real business cases.

Each week there were interactive activities, allowing the cohort to share their thoughts on the topic addressed that week.

Some interesting machine learning applications suggested by the cohort were a social distancing detection algorithm and a chat bot to improve customer service.

Assignments & Final Project

The second part of the course required the submission of weekly assignments: one in Python and a more theoretical one applying some probability theory.

For me, this meant doubling my study time from 2-3 hours to 4-6 hours per week.

I found the Python work the most interesting and the theoretical exercises a bit long and repetitive… as a programmer, I trust the machine to do the maths!

The last week of the course was all about the final project.

We were given a “hotel bookings” dataset and had to predict whether a reservation was going to be cancelled.

This is a useful prediction, allowing hotels to make informed decisions on how to mitigate the impact of cancellations, for example through overbooking.

We had to train different machine learning models (Decision Tree, Naïve Bayes and KNN), compare their accuracy and explain why one performed better than another. The tricky part was discovering that some input variables were correlated, which caused the models to respond in certain ways. The gist of the assignment was that data preparation and analysis of the input variables are a crucial step for good modelling.
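For anyone curious about the shape of that workflow, the sketch below reproduces the general approach under assumed file and column names (the actual dataset schema isn't shown here): check the input variables for correlation, then train and compare the three models.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical file and column names; the real dataset schema may differ.
df = pd.read_csv("hotel_bookings.csv")
y = df["is_canceled"]
X = pd.get_dummies(df.drop(columns=["is_canceled"]), drop_first=True).fillna(0)

# Data analysis first: flag pairs of highly correlated input variables,
# since they can distort how some models behave.
corr = X.corr().abs()
off_diagonal = corr.where(~np.eye(len(corr), dtype=bool))
print("Highly correlated pairs:")
print(off_diagonal.stack().loc[lambda s: s > 0.8])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Train the three models from the assignment and compare their accuracy.
models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```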

A model trained on bad data, meaning data that is not shaped correctly even if it contains the correct information, is not a reliable model.

Some interesting results were that cancellations happen more often with customers paying a deposit and for group bookings.
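Patterns like these can be surfaced during the data-analysis step with a simple group-by on the cancellation flag; the snippet below assumes hypothetical column names along the lines of the ones we worked with.

```python
import pandas as pd

# Hypothetical file and column names, used only to illustrate the idea.
df = pd.read_csv("hotel_bookings.csv")

# Cancellation rate by deposit type and by customer/booking type.
print(df.groupby("deposit_type")["is_canceled"].mean())
print(df.groupby("customer_type")["is_canceled"].mean())
```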

Next Steps

At the end of the course I feel I know a lot more about how a machine learning project should be executed and the stages involved (analysis, transformation, model evaluation).

For all the computational cleverness of Machine Learning, I learnt to appreciate how important it is to first assess its applicability. To do justice to this critical lesson from the course, I wrote a separate, more technical post answering the question: is machine learning always feasible?

Finally, having looked at the many examples from different industries during the live sessions, I am confident that we can apply correct probabilistic assumptions to create reliable machine learning models within my company’s domain.