References
[1] Aston Zhang, Zachary C. Lipton, Mu Li, Alexander J. Smola (2023). Dive into Deep Learning (preview version). Cambridge University Press.
[2] Nora Ambroz. Natural Language Processing: A Short Introduction to Get You Started. Source: https://aliz.ai/en/blog/natural-language-processing-a-short-introduction-to-get-you-started/
[3] Eli Stevens, Luca Antiga, Thomas Viehmann (2020). Deep Learning with PyTorch (1st ed.). Manning.
[5] Horace He (2019). The State of Machine Learning Frameworks in 2019. Source: https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/
[6] Debasish Kalita. A Brief Overview of Recurrent Neural Networks (RNN). Source: https://www.analyticsvidhya.com/blog/2022/03/a-brief-overview-of-recurrent-neural-networks-rnn/
[8] Lilian Weng (2018). Flow-Based Deep Generative Models. Source: https://lilianweng.github.io/posts/2018-10-13-flow-models/
[9] Simeon Kostadinov. Understanding Encoder-Decoder Sequence to Sequence Model. Source: https://towardsdatascience.com/understanding-encoder-decoder-sequence-to-sequence-model-679e04af4346
[10] Zhaoyang Niu, Guoqiang Zhong, Hui Yu (2021). A review on attention mechanism of deep learning. Source: https://www.sciencedirect.com/science/article/abs/pii/S092523122100477X
[11] arvindpdmn. Attention Mechanism in Neural Networks. Source: https://devopedia.org/attention-mechanism-in-neural-networks
[12] Yasuto Tamura (2021). Multi-head attention mechanism: "queries", "keys", and "values," over and over again. Source: https://data-science-blog.com/blog/2021/04/07/multi-head-attention-mechanism/
[13] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin (2017). Attention Is All You Need. Google Brain. arXiv: https://arxiv.org/pdf/1706.03762.pdf
[14] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever (2018). Improving Language Understanding by Generative Pre-Training. Source: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
[15] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever (2022). Robust Speech Recognition via Large-Scale Weak Supervision. arXiv: https://arxiv.org/pdf/2212.04356.pdf
[17] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio (2015). Attention-Based Models for Speech Recognition. arXiv: https://arxiv.org/pdf/1506.07503.pdf
[20] Diederik P. Kingma, Prafulla Dhariwal (2018). Glow: Generative Flow with Invertible 1x1 Convolutions. arXiv: https://arxiv.org/pdf/1807.03039.pdf
[21] Ryan Prenger, Rafael Valle, Bryan Catanzaro (2018). WaveGlow: A Flow-Based Generative Network for Speech Synthesis. arXiv: https://arxiv.org/pdf/1811.00002.pdf
[16] Jonathan Shen, Ruoming Pang, Ron J. Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, Rif A. Saurous, et al. (2018). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. arXiv: https://arxiv.org/pdf/1712.05884.pdf