Course

How LLMs Understand & Generate Human Language
Published by Pearson (October 4, 2024)
ISBN-13: 9780135414354
Product Information
Generative language models, such as ChatGPT and Microsoft Bing, have become daily tools for many of us, yet they remain black boxes to most. How does ChatGPT know which word to output next? How does it understand the meaning of the text you prompt it with? Everyone, from those who have never interacted with a chatbot to those who do so regularly, can benefit from a basic understanding of how these language models function. How LLMs Understand & Generate Human Language answers some of your fundamental questions about how generative AI works.
In this course, you will learn about word embeddings: not only how they are used in these models, but also how they can be leveraged to parse large amounts of textual information using concepts such as vector storage and retrieval-augmented generation. Understanding how these models work tells you both what they are capable of and where their limitations lie.
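The retrieval idea mentioned above can be sketched in a few lines: store each document as an embedding vector, then find the stored vectors most similar to a query's embedding. The tiny 3-dimensional vectors and document names below are made up for illustration; real systems use model-produced embeddings with hundreds or thousands of dimensions and a dedicated vector store.

```python
import numpy as np

# Hypothetical document embeddings (illustrative values only).
doc_embeddings = {
    "doc_cats": np.array([0.9, 0.1, 0.0]),
    "doc_dogs": np.array([0.8, 0.2, 0.1]),
    "doc_taxes": np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Similarity of two vectors by the angle between them (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_embedding, k=1):
    """Return the names of the k stored documents most similar to the query."""
    scored = sorted(
        doc_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# A query embedding close to the "animal" documents retrieves them first;
# in retrieval-augmented generation, the retrieved text would then be fed
# to the language model as context for its answer.
query = np.array([0.85, 0.15, 0.05])
print(retrieve(query, k=2))
```

This is only the retrieval half of retrieval-augmented generation; the course covers how the retrieved text is then combined with a prompt to ground the model's output.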
Lesson 1: Introduction to LLMs and Generative AI
Lesson 2: Word Embeddings
Lesson 3: Word Embeddings in Generative Language Models
Lesson 4: Other Use Cases for Embeddings
Kate Harwood currently works with the New York Times' R&D team to integrate state-of-the-art large language models into the Times' reporting and products. She holds a Master's in Computer Science (Machine Learning) from Columbia University. Her primary focus is natural language processing and ethical AI. She has worked on misinformation detection systems with the Columbia NLP Lab and has researched techniques for mitigating bias in word embedding models. Before her work in NLP, Kate was a software engineer on the Image Search team at Google.