What Is LLM Poisoning? Interesting Break Through
https://www.anthropic.com/research/small-samples-poison
In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a “backdoor” vulnerability in a large language model—regardless of model size or training data volume. Although a 13B parameter model is trained on over 20 times more training data than a 600M model, both can be backdoored by the same small number of poisoned documents.
——————————————————————————————————
Festive offers are till Diwali .
On this festive offer we are providing 20% off on all our live courses. USE COUPON CODE AI20.
Enrollment link: https://www.krishnaik.in/liveclasses
https://www.krishnaik.in/liveclass2/ultimate-rag-bootcamp?id=7
Go ahead and utilize this opportunity.
Reach out to Krish Naik’s counselling team on 📞 +919111533440 or +91 84848 37781 in case of any queries we are there to help you out.
source
