Getting machine learning experiments right — introduction to cross-validation


Online Workshop by COMBINE fellows
Friday, September 25th, 1 – 3pm

Leading Fellows: Renee Chou, Jason Fan, Bilal Moiz, Riya Samanta

Machine learning models have powerful applications across numerous disciplines and can provide valuable insights into your work. But how do you measure your model’s performance and ensure that you are getting the most out of your machine learning experiments?
Proper model evaluation is crucial not only for getting the most out of your data, but also for publishing your results and convincing your peers that they are robust and believable. However, model evaluation can be challenging, especially if you’re not familiar with it.

In this workshop, we will delve into model evaluation and highlight a specific evaluation technique: nested cross-validation. Join our peer-to-peer workshop to learn more about nested cross-validation and gain hands-on experience.
This workshop will consist of:

  • a short lecture to familiarize attendees with the model evaluation workflow and cross-validation concepts
  • an in-depth, hands-on tutorial in a Jupyter notebook (a logged-in Google account will be needed)
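
As a preview of the tutorial, here is a minimal sketch of nested cross-validation using scikit-learn. The dataset, model, and hyperparameter grid are illustrative choices for this example, not necessarily the ones used in the workshop notebook:

```python
# Minimal sketch of nested cross-validation with scikit-learn.
# The dataset, model, and hyperparameter grid are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: select hyperparameters via grid search on each training fold.
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)
param_grid = {"C": [0.1, 1, 10]}
model = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=inner_cv)

# Outer loop: estimate generalization performance on held-out folds
# that the hyperparameter search never sees.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=outer_cv)
print(f"Nested CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The key idea is that the outer loop's test folds are never used for model selection, so the reported score is not inflated by hyperparameter tuning.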

Students from all disciplines and backgrounds are welcome, although some prior familiarity with the machine learning workflow will be very helpful.

Tutorial Slides