Episodes

  • A History of NLP and Wisecube’s AI Journey
    Jun 3 2025

    In this episode, Vishnu and Alex reflect on Wisecube’s 8-year journey and their more than 15 years of experience in AI and NLP. They discuss pioneering search engines built on TF-IDF, constructing knowledge graphs (Orpheus), addressing LLM reliability with Pythia, key milestones in AI development, and the evolution of NLP. Topics include the Eliza effect, real-world healthcare and research applications, CAC (computer-aided coding), drug discovery, and Wisecube's recent acquisition by John Snow Labs. They close by exploring the future of NLP and AI in healthcare.

    Alex Thomas's book, "Natural Language Processing with Spark NLP: Learning to Understand Text at Scale": https://www.amazon.com/Natural-Language-Processing-Spark-NLP/dp/1492047767


    Timestamps

    00:00 Introduction and Personal Notes

    01:13 Wisecube is Now Part of John Snow Labs!

    02:15 History and Evolution of NLP

    03:27 Early Search Engine Projects

    07:55 CAC (Computer-Aided Coding) Healthcare Project

    18:05 Drug Discovery Research

    28:12 Knowledge Graphs and Orpheus/Pythia Projects

    35:51 Future Outlook and Conclusion


    Available on:

    • YouTube: https://youtube.com/@WisecubeAI/podcasts

    • Apple Podcasts: https://apple.co/4kPMxZf

    • Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music: https://bit.ly/4izpdO2


    Follow us:

    - John Snow Labs: https://www.johnsnowlabs.com/?utm_source=acquisition&utm_medium=link&utm_campaign=wisecube

    - LinkedIn: https://www.linkedin.com/company/wisecube/



    #AI #NLP #LLM #MachineLearning #KnowledgeGraphs #ArtificialIntelligence #DataScience #HealthcareAI #StartupJourney #AIResearch #DrugDiscovery #NaturalLanguageProcessing

    38 min
  • LLM Fine-Tuning: RLHF vs DPO and Beyond
    May 13 2025

    In this episode of Gradient Descent, we explore two competing approaches to fine-tuning LLMs: Reinforcement Learning with Human Feedback (RLHF) and Direct Preference Optimization (DPO). Dive into the mechanics of RLHF, its computational challenges, and how DPO simplifies the process by eliminating the need for a separate reward model. We also discuss supervised fine-tuning, emerging methods like Identity Preference Optimization (IPO) and Kahneman-Tversky Optimization (KTO), and their real-world applications in models like Llama 3 and Mistral. Learn practical LLM optimization strategies, including task modularization to boost performance without extensive fine-tuning.
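    The key contrast in the episode is that DPO removes RLHF's separate reward model: the policy-to-reference log-ratio acts as an implicit reward inside a single classification-style loss. A minimal sketch for one preference pair, using illustrative scalar log-prob inputs rather than any particular library's API:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair (scalar log-probs, illustrative).

    Inputs are the summed log-probabilities of the chosen and rejected
    responses under the trained policy (pi_*) and the frozen reference
    model (ref_*). No separate reward model is needed: the implicit
    reward is the log-ratio between policy and reference.
    """
    margin = (pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)): small when the policy prefers the
    # chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

    At zero margin the loss is log 2, and it shrinks as the policy widens its preference for the chosen response relative to the reference, which is the optimization pressure the episode describes.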


    Timestamps:

    Intro - 0:00

    Overview of LLM Fine-Tuning - 00:48

    Deep Dive into RLHF - 02:46

    Supervised Fine-Tuning vs. RLHF - 10:38

    DPO and Other RLHF Alternatives - 14:43

    Real-World Applications in Frontier Models - 22:23

    Practical Tips for LLM Optimization - 25:18

    Closing Thoughts - 36:05


    References:

    [1] Training language models to follow instructions with human feedback https://arxiv.org/abs/2203.02155

    [2] Direct Preference Optimization: Your Language Model is Secretly a Reward Model https://arxiv.org/abs/2305.18290

    [3] Hugging Face Blog on DPO: Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO) https://huggingface.co/blog/ariG23498/rlhf-to-dpo

    [4] Comparative Analysis: RLHF and DPO Compared https://crowdworks.blog/en/rlhf-and-dpo-compared/

    [5] YouTube Explanation: How to fine-tune LLMs directly without reinforcement learning https://www.youtube.com/watch?v=k2pD3k1485A


    Listen on:

    • Apple Podcasts:

    https://podcasts.apple.com/us/podcast/gradient-descent-podcast-about-ai-and-data/id1801323847

    • Spotify:

    https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music:

    https://music.amazon.com/podcasts/79f6ed45-ef49-4919-bebc-e746e0afe94c/gradient-descent---podcast-about-ai-and-data

    • YouTube: https://youtube.com/@WisecubeAI/podcasts


    Our solutions:

    - https://askpythia.ai/ - LLM Hallucination Detection Tool

    - https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis


    Follow us:

    - Pythia Website: https://askpythia.ai/

    - Wisecube Website: https://www.wisecube.ai

    - LinkedIn: https://www.linkedin.com/company/wisecube/

    - Facebook: https://www.facebook.com/wisecubeai

    - Twitter: https://x.com/wisecubeai

    - Reddit: https://www.reddit.com/r/pythia/

    - GitHub: https://github.com/wisecubeai


    #FineTuning #LLM #RLHF #AI #MachineLearning #AIDevelopment

    38 min
  • The Future of Prompt Engineering: Prompts to Programs
    Apr 29 2025

    Explore the evolution of prompt engineering in this episode of Gradient Descent. Manual prompt tuning — slow, brittle, and hard to scale — is giving way to DSPy, a framework that turns LLM prompting into a structured, programmable, and optimizable process.

    Learn how DSPy’s modular approach — with Signatures, Modules, and Optimizers — enables LLMs to tackle complex tasks like multi-hop reasoning and math problem solving, achieving accuracy comparable to much larger models. We also dive into real-world examples, optimization strategies, and why the future of prompting looks a lot more like programming.
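    To make the "prompts as programs" idea concrete, here is a toy sketch of compiling a declarative signature into a prompt template. This is plain Python illustrating the concept only, not DSPy's real Signature/Module/Optimizer API (see the DSPy links in Mentioned Materials for the actual interfaces):

```python
def make_prompt(signature: str, **inputs) -> str:
    """Toy 'signature to prompt' compiler.

    A signature such as "question -> answer" names the input and output
    fields, and the prompt text is derived from it rather than written
    by hand. NOT DSPy's real API, just a sketch of the concept.
    """
    in_field, out_field = (s.strip() for s in signature.split("->"))
    return f"{in_field.capitalize()}: {inputs[in_field]}\n{out_field.capitalize()}:"

prompt = make_prompt("question -> answer", question="What is 2+2?")
# In a real framework, an optimizer could now rewrite this template or
# attach few-shot examples automatically instead of a human editing it.
```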


    Listen to our podcast on these platforms:

    • YouTube: https://youtube.com/@WisecubeAI/podcasts

    • Apple Podcasts: https://apple.co/4kPMxZf

    • Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music: https://bit.ly/4izpdO2


    Mentioned Materials:

    • DSPy Paper - https://arxiv.org/abs/2310.03714

    • DSPy official site - https://dspy.ai/

    • DSPy GitHub - https://github.com/stanfordnlp/dspy

    • LLM abstractions guide - https://www.twosigma.com/articles/a-guide-to-large-language-model-abstractions/


    Our solutions:

    - https://askpythia.ai/ - LLM Hallucination Detection Tool

    - https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis


    Follow us:

    - Pythia Website: https://askpythia.ai/

    - Wisecube Website: https://www.wisecube.ai

    - LinkedIn: https://www.linkedin.com/company/wisecube/

    - Facebook: https://www.facebook.com/wisecubeai

    - Twitter: https://x.com/wisecubeai

    - Reddit: https://www.reddit.com/r/pythia/

    - GitHub: https://github.com/wisecubeai


    #AI #PromptEngineering #DSPy #MachineLearning #LLM #ArtificialIntelligence #AIdevelopment

    36 min
  • Agentic AI – Hype or the Next Step in AI Evolution?
    Apr 12 2025

    Let’s dive into Agentic AI, guided by the "Cognitive Architectures for Language Agents" (CoALA) paper. What defines an agentic system? How does it plan, leverage memory, and execute tasks? We explore semantic, episodic, and procedural memory, discuss decision-making loops, and examine how agents integrate with external APIs (think LangGraph). Learn how AI tackles complex automation — from code generation to playing Minecraft — and why designing robust action spaces is key to scaling systems. We also touch on challenges like memory updates and the ethics of agentic AI. Get actionable insight…
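    The memory taxonomy and decision loop described above can be sketched in a few lines. All names and structure here are hypothetical simplifications of the CoALA framing, not an API from the paper or from LangGraph:

```python
def run_agent(goal, tools, max_steps=3):
    """Minimal sketch of a CoALA-style agent loop (hypothetical names).

    Three memory stores mirror the episode's taxonomy: semantic (facts),
    episodic (a log of past steps), and procedural (available tools).
    """
    memory = {"semantic": {}, "episodic": [], "procedural": dict(tools)}
    for step in range(max_steps):
        # Planning step (trivial here): pick the first available tool.
        name, tool = next(iter(memory["procedural"].items()))
        result = tool(goal)                      # act via the external tool
        memory["episodic"].append((step, name))  # remember what we did
        if result is not None:
            memory["semantic"][goal] = result    # store the learned fact
            return result, memory
    return None, memory

tools = {"lookup": lambda query: query.upper()}  # stand-in external API
result, memory = run_agent("hello", tools)
```

    Designing the action space, which tools exist and when each may fire, is the part the episode flags as key to scaling such systems.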

    🔗 Links to the CoALA paper, LangGraph, and more in the description.

    🔔 Subscribe to stay updated with Gradient Descent!


    Listen on:

    • YouTube: https://youtube.com/@WisecubeAI/podcasts

    • Apple Podcasts: https://apple.co/4kPMxZf

    • Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music: https://bit.ly/4izpdO2


    Mentioned Materials:

    • Cognitive Architectures for Language Agents (CoALA) - https://arxiv.org/abs/2309.02427

    • Memory for agents - https://blog.langchain.dev/memory-for-agents/

    • LangChain - https://python.langchain.com/docs/introduction/

    • LangGraph - https://langchain-ai.github.io/langgraph/


    Our solutions:

    • https://askpythia.ai/ - LLM Hallucination Detection Tool

    • https://www.wisecube.ai - Wisecube AI platform for analyzing millions of biomedical publications, clinical trials, and protein and chemical databases


    Follow us:

    - Pythia Website: https://askpythia.ai/

    - Wisecube Website: https://www.wisecube.ai

    - LinkedIn: https://www.linkedin.com/company/wisecube/

    - Facebook: https://www.facebook.com/wisecubeai

    - X: https://x.com/wisecubeai

    - Reddit: https://www.reddit.com/r/pythia/

    - GitHub: https://github.com/wisecubeai


    #AgenticAI #FutureOfAI #AIInnovation #ArtificialIntelligence #MachineLearning #DeepLearning #LLM

    41 min
  • LLM as a Judge: Can AI Evaluate Itself?
    Mar 22 2025
    In the second episode of Gradient Descent, Vishnu Vettrivel (CTO of Wisecube) and Alex Thomas (Principal Data Scientist) explore the innovative yet controversial idea of using LLMs to judge and evaluate other AI systems. They discuss the hidden human role in AI training, the limitations of traditional benchmarks, the strengths and weaknesses of automated evaluation, and best practices for building reliable AI judgment systems.


    Timestamps:

    00:00 – Introduction & Context

    01:00 – The Role of Humans in AI

    03:58 – Why Is Evaluating LLMs So Difficult?

    09:00 – Pros and Cons of LLM-as-a-Judge

    14:30 – How to Make LLM-as-a-Judge More Reliable?

    19:30 – Trust and Reliability Issues

    25:00 – The Future of LLM-as-a-Judge

    30:00 – Final Thoughts and Takeaways


    Listen on:

    • YouTube: https://youtube.com/@WisecubeAI/podcasts

    • Apple Podcasts: https://apple.co/4kPMxZf

    • Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music: https://bit.ly/4izpdO2


    Our solutions:

    - https://askpythia.ai/ - LLM Hallucination Detection Tool

    - https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis


    Follow us:

    - Pythia Website: www.askpythia.ai

    - Wisecube Website: www.wisecube.ai

    - LinkedIn: www.linkedin.com/company/wisecube

    - Facebook: www.facebook.com/wisecubeai

    - Reddit: www.reddit.com/r/pythia/


    Mentioned Materials:

    - Best Practices for LLM-as-a-Judge: https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG

    - LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods: https://arxiv.org/pdf/2412.05579v2

    - Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena: https://arxiv.org/abs/2306.05685

    - Guide to LLM-as-a-Judge: https://www.evidentlyai.com/llm-guide/llm-as-a-judge

    - Preference Leakage: A Contamination Problem in LLM-as-a-Judge: https://arxiv.org/pdf/2502.01534

    - Large Language Models Are Not Fair Evaluators: https://arxiv.org/pdf/2305.17926

    - Is LLM-as-a-Judge Robust? Investigating Universal Adversarial Attacks on Zero-shot LLM Assessment: https://arxiv.org/pdf/2402.14016v2

    - Optimization-based Prompt Injection Attack to LLM-as-a-Judge: https://arxiv.org/pdf/2403.17710v4

    - AWS Bedrock: Model Evaluation: https://aws.amazon.com/blogs/machine-learning/llm-as-a-judge-on-amazon-bedrock-model-evaluation/

    - Hugging Face: LLM Judge Cookbook: https://huggingface.co/learn/cookbook/en/llm_judge
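    One concrete reliability tactic from the materials above (e.g., "Large Language Models Are Not Fair Evaluators") is to query the judge twice with the answer order swapped and keep only consistent verdicts. A minimal sketch, where `judge` stands in for any callable wrapping an LLM call:

```python
def pairwise_judge(judge, question, answer_a, answer_b):
    """Position-debiased pairwise evaluation (sketch).

    `judge` is any callable (question, first, second) -> "first"/"second",
    e.g. a wrapper around an LLM call. Asking twice with the answer order
    swapped and keeping only consistent verdicts mitigates position bias.
    """
    verdict_1 = judge(question, answer_a, answer_b)  # A shown first
    verdict_2 = judge(question, answer_b, answer_a)  # B shown first
    if verdict_1 == "first" and verdict_2 == "second":
        return "A"
    if verdict_1 == "second" and verdict_2 == "first":
        return "B"
    return "tie"  # inconsistent verdicts: the judge is position-biased
```

    A judge that always favors the first-listed answer produces contradictory verdicts across the two calls, so the swap exposes the bias instead of silently recording it.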
    32 min
  • AI Scaling Laws, DeepSeek’s Cost Efficiency & The Future of AI Training
    Mar 6 2025
    In this first episode of Gradient Descent, hosts Vishnu Vettrivel (CTO of Wisecube AI) and Alex Thomas (Principal Data Scientist) discuss the rapid evolution of AI, the breakthroughs in LLMs, and the role of Natural Language Processing in shaping the future of artificial intelligence. They also share their experiences in AI development and explain why this podcast differs from other AI discussions.


    Chapters:

    00:00 – Introduction

    01:56 – DeepSeek Overview

    02:55 – Scaling Laws and Model Performance

    04:36 – Peak Data: Are we running out of quality training data?

    08:10 – Industry reaction to DeepSeek

    09:05 – Jevons' Paradox: Why cheaper AI can drive more demand

    11:04 – Supervised Fine-Tuning vs Reinforcement Learning (RLHF)

    14:49 – Why Reinforcement Learning helps AI models generalize

    20:29 – Distillation and Training Efficiency

    25:01 – AI safety concerns: Toxicity, bias, and censorship

    30:25 – Future Trends in LLMs: Cheaper, more specialized AI models?

    37:30 – Final thoughts and upcoming topics


    Listen on:

    • YouTube: https://youtube.com/@WisecubeAI/podcasts

    • Apple Podcasts: https://apple.co/4kPMxZf

    • Spotify: https://open.spotify.com/show/1nG58pwg2Dv6oAhCTzab55

    • Amazon Music: https://bit.ly/4izpdO2


    Our solutions:

    - https://askpythia.ai/ - LLM Hallucination Detection Tool

    - https://www.wisecube.ai - Wisecube AI platform for large-scale biomedical knowledge analysis


    Follow us:

    - Pythia Website: www.askpythia.ai

    - Wisecube Website: www.wisecube.ai

    - LinkedIn: www.linkedin.com/company/wisecube

    - Facebook: www.facebook.com/wisecubeai

    - Reddit: www.reddit.com/r/pythia/


    Mentioned Materials:

    - Jevons’ Paradox: https://en.wikipedia.org/wiki/Jevons_paradox

    - Scaling Laws for Neural Language Models: https://arxiv.org/abs/2001.08361

    - Distilling the Knowledge in a Neural Network: https://arxiv.org/abs/1503.02531

    - SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training: https://arxiv.org/abs/2501.17161

    - DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning: https://arxiv.org/abs/2501.12948

    - Reinforcement Learning: An Introduction (Sutton & Barto): https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
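    The distillation segment (20:29) concerns training a small student model on a teacher's temperature-softened output distribution, as in the Hinton et al. paper listed in Mentioned Materials. A minimal sketch of that soft-label loss, with illustrative function names:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    peak = max(scaled)
    exps = [math.exp(z - peak) for z in scaled]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation loss: cross-entropy of the student against
    the teacher's temperature-softened distribution (after Hinton et al.).
    A temperature above 1 spreads probability onto the teacher's "wrong"
    classes, exposing the relative similarities the student should learn."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(p * math.log(q)
                for p, q in zip(teacher_probs, student_probs))
```

    Cross-entropy is minimized when the student's softened distribution matches the teacher's exactly, so mimicking the full distribution, not just the top label, is what makes distillation training-efficient.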
    40 min