Predicting the Future from the Past: Sequential Data and RNNs
About this audio content
This text is an excerpt from the "Dive into Deep Learning" book, focusing on the processing of sequential data. The authors introduce the challenges of working with data that occurs in a specific order, such as time series or text, and explain why such sequences cannot be treated as independent observations. They cover autoregressive models, in which future values are predicted from past values, and highlight the common problem of error accumulation when predicting further into the future. The text also discusses Markov models, where only a limited history is needed to predict future events, and the importance of understanding the causal structure of the data. The excerpt concludes with a practical example of using linear regression for autoregressive modeling on synthetic time series data, demonstrating the limitations of simple models for long-term prediction.
Read more: https://d2l.ai/chapter_recurrent-neural-networks/sequence.html
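The autoregressive example described above can be sketched in plain NumPy. This is a minimal illustration, not the book's own code: the sine-plus-noise series, the order `tau = 4`, and the train/test split are hypothetical choices made here for demonstration. It fits a linear model that predicts each value from its previous `tau` values (a tau-th order Markov assumption), then contrasts one-step-ahead prediction with multi-step prediction, where errors accumulate because the model feeds on its own outputs.

```python
import numpy as np

# Synthetic time series: a sine wave plus Gaussian noise
# (hypothetical parameters, chosen for illustration).
rng = np.random.default_rng(0)
T = 1000
time = np.arange(1, T + 1, dtype=np.float64)
x = np.sin(0.01 * time) + rng.normal(0.0, 0.2, T)

# Autoregressive setup: predict x[t] from the previous tau values.
tau = 4
features = np.stack([x[i : T - tau + i] for i in range(tau)], axis=1)
labels = x[tau:]

# Fit linear regression by least squares on the first 600 examples.
n_train = 600
A = np.hstack([features[:n_train], np.ones((n_train, 1))])  # bias column
w, *_ = np.linalg.lstsq(A, labels[:n_train], rcond=None)

# One-step-ahead prediction: always conditions on true past values.
onestep = np.hstack([features, np.ones((len(features), 1))]) @ w

# Multi-step prediction: beyond the training range, feed the model's
# own predictions back in as inputs, so errors can compound.
multistep = np.zeros(T)
multistep[: n_train + tau] = x[: n_train + tau]
for t in range(n_train + tau, T):
    past = multistep[t - tau : t]
    multistep[t] = np.concatenate([past, [1.0]]) @ w

mse_onestep = np.mean((onestep - labels) ** 2)
mse_multistep = np.mean((multistep[n_train + tau :] - x[n_train + tau :]) ** 2)
print("one-step MSE  :", mse_onestep)
print("multi-step MSE:", mse_multistep)
```

One-step error stays close to the noise floor because each prediction sees the true history, while the recursive multi-step forecast typically degrades with horizon, which is the error-accumulation effect the excerpt describes.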