Discover how to set up an effective MLflow environment to track your experiments, compare them, and select the best model for deployment
Training and fine-tuning different models is a fundamental task for every computer vision researcher. Even for simple ones, we run a hyperparameter search to find the optimal way of training the model on our custom dataset: data augmentation techniques (which already come with many options), the choice of optimizer, the learning rate, and the model itself. Is this the best architecture for my case? Should I add more layers, or change the architecture altogether? Many more questions are waiting to be asked and explored.
While searching for an answer to all these questions, I used to save the training log files and output checkpoints in separate folders on my local machine, change the output directory name every time I ran a training, and compare the final metrics manually, one by one. Tackling experiment tracking in such a manual way has many disadvantages: it's old-fashioned, time- and energy-consuming, and prone to errors.
In this blog post, I'll show you how to use MLflow, one of the best tools for tracking your experiments. It lets you log whatever information you need, visualize and compare the training runs you have completed, and decide which one is the optimal choice, all in a user- (and eye-) friendly environment!
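As a quick taste of what this looks like in practice, here is a minimal sketch of the MLflow tracking API. The experiment name, run name, hyperparameters, and metric values below are illustrative placeholders, not the exact setup used later in this post.

```python
import mlflow

# Group related runs under one experiment
# (experiment and run names here are illustrative placeholders)
mlflow.set_experiment("cv-model-finetuning")

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters you want to compare across runs
    mlflow.log_params({"optimizer": "adam", "learning_rate": 1e-3})

    for epoch in range(3):
        # In a real run this value would come from your training loop;
        # a dummy number is used here so the sketch stays self-contained
        val_accuracy = 0.80 + 0.05 * epoch
        mlflow.log_metric("val_accuracy", val_accuracy, step=epoch)
```

After running a script like this, launching `mlflow ui` from the same directory opens a local dashboard where every run, its parameters, and its metrics can be browsed and compared side by side, which is exactly the workflow we will build up step by step.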