GET THE FULL COURSE, Machine Learning, Data Science, and Generative AI with Python, at:

https://www.udemy.com/course/data-science-and-machine-learning-with-python-hands-on/?referralCode=C6B705087054C363CBEB

OpenAI released an exciting new fine-tuning API just last night, allowing you to train GPT-3.5 Turbo on your own conversations! This offers a much more scalable, robust way to customize GPT that goes far beyond prompt engineering. And you can query your fine-tuned GPT-3.5 model using the Chat Completions API, or use the OpenAI Playground to interact with it just like you would with ChatGPT.
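If you want to follow along, here's a minimal sketch of that workflow using the current openai Python package (the client interface may differ slightly from the version shown in the video, and the file name is a placeholder):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL training file (one chat-formatted example per line).
training_file = client.files.create(
    file=open("data_dialog.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # poll this job until it completes and reports your new model name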

After a few slides covering how the new fine-tuning API works, we dive right into a demo of using it. We got our hands on the text of every script from Star Trek: The Next Generation, pre-processed it into the format OpenAI expects for fine-tuning, and fine-tuned GPT-3.5 with every line of dialog Data ever uttered, in the context of who was speaking with him.
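The expected format is a JSONL file where each line is a small chat transcript. Here's a rough sketch of that pre-processing step; the input fields (speaker, line, reply) are hypothetical, and the example exchange is invented for illustration, not actual dialog from the scripts:

import json

examples = [
    {"speaker": "PICARD",
     "line": "Mr. Data, report.",
     "reply": "Sensors show no other vessels within ten light years, Captain."},
]

with open("data_dialog.jsonl", "w") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system",
                 "content": "You are Lt. Commander Data from Star Trek: The Next Generation."},
                {"role": "user", "content": f"{ex['speaker']}: {ex['line']}"},
                {"role": "assistant", "content": ex["reply"]},
            ]
        }
        f.write(json.dumps(record) + "\n")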

We’ll then have some conversations with our simulated Data, and compare how it performs against the non-fine-tuned GPT-3.5 model.
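That comparison is just two calls to the Chat Completions API, one with the fine-tuned model and one with the stock model. A quick sketch (the fine-tuned model ID below is a placeholder; use the one reported when your fine-tuning job finishes):

from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system",
     "content": "You are Lt. Commander Data from Star Trek: The Next Generation."},
    {"role": "user", "content": "PICARD: Data, what do you make of this nebula?"},
]

for model in ["ft:gpt-3.5-turbo:my-org::abc123", "gpt-3.5-turbo"]:  # placeholder ID
    response = client.chat.completions.create(model=model, messages=messages)
    print(model, "->", response.choices[0].message.content)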

We’ve created a real AI of a fictional AI!

To learn more, check out our 18-hour comprehensive course on ML, Data Science, and Generative AI at https://www.udemy.com/course/data-science-and-machine-learning-with-python-hands-on/?referralCode=C6B705087054C363CBEB
