Fighting hallucinations with 15 lines of code using Self-Correcting RAG
The recent advances in generative AI and the release of ChatGPT shook the world to its core, inciting both excitement and fear for the future. The capabilities of large language models (LLMs) have made headlines ever since. But soon after its release, people began exposing its flaws. The 175-billion-parameter natural language model would often fail to provide its users with relevant responses. It frequently gave incorrect answers to questions even a kid could solve, and it was surprisingly easy to trick it into contradicting itself. For instance, if you asked the model "What is 3 + 2?" and it correctly responded with 5, you could simply insist the answer is 6 and the model would agree.

Misinformation is another problem with LLMs. Recently, Air Canada was ordered to compensate a traveller because its chatbot had provided false information about the airline's refund policy. This phenomenon, where an LLM generates nonsensical or inaccurate information in response to a user query, is called hallucination.