Fighting hallucinations with 15 lines of code using Self-Correcting RAG

Recent advances in generative AI and the release of ChatGPT shook the world, inciting both excitement and fear about the future, and the capabilities of large language models (LLMs) have made headlines ever since. Soon after its release, however, people began exposing its flaws. The 175-billion-parameter model would often fail to provide relevant responses, giving incorrect answers to questions even a kid could solve. It was also easy to steer it into a wrong answer: if you asked the model "What is 3 + 2?" and it answered 5, you could simply tell it the answer is 6 and it would agree. Misinformation is another problem with these models. Air Canada was recently ordered to compensate a traveller after its chatbot gave false information about its refund policy. This phenomenon, where an LLM generates nonsensical or inaccurate information in response to a user query, is called hallucination.
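In broad strokes, a self-correcting RAG pipeline grounds the model in retrieved passages, asks it to check its own draft answer against those passages, and regenerates whenever the check fails. Here is a minimal sketch of that loop; the `retrieve` and `generate` callables are hypothetical stand-ins for a vector-store lookup and an LLM call, not part of any particular library:

```python
# Self-correcting RAG, sketched: retrieve -> draft -> verify -> (maybe) redraft.
def self_correcting_rag(question, retrieve, generate, max_retries=2):
    context = "\n".join(retrieve(question))  # supporting passages for the question
    answer = generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    for _ in range(max_retries):
        verdict = generate(
            f"Context:\n{context}\n\nAnswer: {answer}\n"
            "Is this answer fully supported by the context? Reply SUPPORTED or UNSUPPORTED."
        )
        if "UNSUPPORTED" not in verdict.upper():
            break  # the draft is grounded in the retrieved passages, keep it
        answer = generate(  # otherwise, redraft using only the retrieved context
            f"Context:\n{context}\n\nQuestion: {question}\n"
            f"The previous answer ({answer}) was not supported by the context. "
            "Rewrite it using only facts from the context, or say you don't know."
        )
    return answer
```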

Read More

Imbalanced dataset in machine learning

Data is crucial for training machine learning (ML) models; a model is only as good as the data it feeds on. However, the data used to train a model can have several flaws, class imbalance, where a few classes account for most of the samples, being among the most common.
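As a small illustration (the labels below are made up), one common first step is to count class frequencies and derive inverse-frequency weights that up-weight the rare class during training:

```python
from collections import Counter

labels = ["fraud"] * 20 + ["legit"] * 980  # hypothetical, heavily skewed labels
counts = Counter(labels)
total = sum(counts.values())

# Inverse-frequency weights: rarer classes get proportionally larger weights.
class_weights = {cls: total / (len(counts) * n) for cls, n in counts.items()}
print(counts)         # Counter({'legit': 980, 'fraud': 20})
print(class_weights)  # {'fraud': 25.0, 'legit': 0.51...}
```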

Read More

ResNet - Why residual units work?

In a convolutional neural network, the convolutional layers learn hierarchical features: the lower layers learn low-level features such as edges and corners, while the higher layers learn high-level features such as mouths, ears, and tails. Increasing the network depth, as demonstrated by InceptionNet, lets the network learn richer representations and improves its performance. However, optimizing such deep networks is not that easy.
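The residual unit is ResNet's answer: rather than asking a stack of layers to learn a full mapping H(x), it learns only the residual F(x) = H(x) - x and adds the input back through a skip connection, so an identity mapping is trivial to represent and gradients flow more easily through very deep networks. A minimal sketch in PyTorch (an assumed framework choice):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions whose output is added back onto the input."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection: output = F(x) + x

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```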

Read More
