Automating Scientific Discovery: ScienceAgentBench
About this audio
Introducing ScienceAgentBench, a new benchmark for evaluating language agents designed to automate scientific discovery. The benchmark comprises 102 tasks extracted from 44 peer-reviewed publications across four disciplines, covering essential steps in a data-driven scientific workflow such as model development, data analysis, and visualization. To ensure scientific authenticity and real-world relevance, the tasks were validated by nine subject matter experts. The paper presents an array of evaluation metrics for assessing program execution, results, and costs, including a rubric-based approach for fine-grained evaluation. Through comprehensive experiments on five LLMs and three frameworks, the study found that the best-performing agent, Claude-3.5-Sonnet with self-debug, could solve only 34.3% of the tasks even when given expert-provided knowledge. These findings highlight the limitations of current language agents in fully automating scientific discovery, emphasizing the need for more rigorous assessment and for future research on improving their capabilities in data processing and in utilizing expert knowledge.
Read the paper: https://arxiv.org/pdf/2410.05080