Navigating AI Evaluation and Observability with Atin Sanyal
About this audio content
As GenAI tools have become more intelligent and robust, the reliability of their output has decreased. Enter Galileo: an AI reliability platform designed for GenAI applications.
Atin Sanyal — the Co-founder and CTO of Galileo — has a background building machine learning tech at Uber and Apple. One innovative technique that sets Galileo apart is ChainPoll — their hallucination detection methodology that uses consensus scoring and prompts the LLM to outline its step-by-step reasoning process.
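As a rough illustration of the two ideas named above (prompting the model for step-by-step reasoning, then aggregating repeated judgments into a consensus score), here is a minimal Python sketch. It assumes a hypothetical `ask_llm(prompt) -> str` client function and is not Galileo's actual ChainPoll implementation.

```python
# Minimal sketch of a ChainPoll-style hallucination check.
# Assumes a hypothetical `ask_llm(prompt: str) -> str` callable; substitute
# your own LLM client. Not Galileo's implementation — just the two ideas:
# chain-of-thought judging and consensus scoring over repeated polls.

from typing import Callable

JUDGE_PROMPT = """Does the following answer contain claims that are not
supported by the given context? Think step by step, explain your reasoning,
then finish with a single line: VERDICT: YES or VERDICT: NO.

Context:
{context}

Answer:
{answer}
"""

def chainpoll_score(context: str, answer: str,
                    ask_llm: Callable[[str], str], n_polls: int = 5) -> float:
    """Return the fraction of independent judgments that flag a hallucination."""
    prompt = JUDGE_PROMPT.format(context=context, answer=answer)
    votes = 0
    for _ in range(n_polls):
        reasoning = ask_llm(prompt)  # each poll elicits step-by-step reasoning
        if "VERDICT: YES" in reasoning.upper():
            votes += 1
    return votes / n_polls  # consensus score in [0, 1]
```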
In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to Atin about:
- What evaluation agents are, and why they get smarter over time
- How Galileo helps enterprises evolve their own AI quality metrics
- Why data quality and confidential computing will become increasingly important to enterprises building AI systems
If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co.