arXiv Analytics

arXiv:2402.15351 [cs.LG]

AutoMMLab: Automatically Generating Deployable Models from Language Instructions for Computer Vision Tasks

Zekang Yang, Wang Zeng, Sheng Jin, Chen Qian, Ping Luo, Wentao Liu

Published 2024-02-23, updated 2024-12-26 (v2)

Automated machine learning (AutoML) is a collection of techniques designed to automate the machine learning development process. While traditional AutoML approaches have been successfully applied to several critical steps of model development (e.g., hyperparameter optimization), there is no AutoML system that automates the entire end-to-end model production workflow for computer vision. To fill this gap, we propose a novel request-to-model task, which involves understanding the user's natural language request and executing the entire workflow to output a production-ready model. This empowers non-expert individuals to easily build task-specific models via a user-friendly language interface. To facilitate development and evaluation, we develop a new experimental platform called AutoMMLab and a new benchmark called LAMP for studying the key components of the end-to-end request-to-model pipeline. Hyperparameter optimization (HPO) is one of the most important components of AutoML. Traditional approaches mostly rely on trial and error, leading to inefficient parameter search. To address this problem, we propose a novel LLM-based HPO algorithm, called HPO-LLaMA. Equipped with extensive knowledge and experience in model hyperparameter tuning, HPO-LLaMA achieves a significant improvement in HPO efficiency. The dataset and code are available at https://github.com/yang-ze-kang/AutoMMLab.
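The abstract describes an HPO loop in which an LLM, rather than random trial-and-error, proposes each new hyperparameter configuration given the history of past trials. The following is a minimal illustrative sketch of that idea, not the authors' HPO-LLaMA implementation: `mock_llm_propose` is a hypothetical stand-in for the fine-tuned LLM (a real system would serialize the trial history into a prompt and parse the model's reply), and `evaluate` is a toy objective with a known optimum.

```python
import math

def evaluate(config):
    """Toy objective: validation score peaks at lr=1e-3, batch_size=64."""
    lr_score = math.exp(-abs(math.log10(config["lr"]) + 3))        # best at lr = 1e-3
    bs_score = math.exp(-abs(math.log2(config["batch_size"]) - 6))  # best at batch_size = 64
    return 0.5 * (lr_score + bs_score)

def mock_llm_propose(history):
    """Hypothetical stand-in for the LLM proposer.
    Given a list of (config, score) trials, suggest the next configuration.
    A real system would build a prompt from `history` and query the model."""
    if not history:
        return {"lr": 1e-2, "batch_size": 32}  # a common starting point
    best_cfg, _ = max(history, key=lambda h: h[1])
    # Imitate knowledge-driven refinement: nudge the best trial so far
    # toward typical sweet spots (smaller lr, larger batch).
    return {"lr": best_cfg["lr"] * 0.5,
            "batch_size": min(best_cfg["batch_size"] * 2, 256)}

def hpo_loop(budget=5):
    """Run `budget` trials: propose -> evaluate -> append to history."""
    history = []
    for _ in range(budget):
        config = mock_llm_propose(history)
        score = evaluate(config)
        history.append((config, score))
    return max(history, key=lambda h: h[1])

best_config, best_score = hpo_loop()
print(best_config, round(best_score, 3))
```

The point of the sketch is the interface: the proposer sees the full trial history each round, so a model with prior tuning experience can converge in far fewer trials than undirected search, which is the efficiency gain the paper attributes to HPO-LLaMA.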

Related articles:
arXiv:2109.14925 [cs.LG] (Published 2021-09-30, updated 2023-04-09)
Genealogical Population-Based Training for Hyperparameter Optimization
arXiv:2106.10575 [cs.LG] (Published 2021-06-19)
EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization
arXiv:2409.18827 [cs.LG] (Published 2024-09-27)
ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning