OpenAI, LLM – Q*, AGI, Self Taught Reasoning, Optimizing, Synthetic data


Trying to catch up on all the news around what happened at OpenAI. There are a lot of rumours around Q*, AGI, self-taught reasoning and optimization, synthetic data, and related algorithms.

It all revolves around LLMs reaching a limit due to a lack of training data. Synthetic data is a possible solution, something I have been talking about for a while. Good-quality synthetic data can improve models. If the constraints, parameters, and distribution can be defined by a combination of human and algorithmic input, we can create better-quality training data for algorithms. Add to that self-learning models generating their own training data to improve further in a continuous loop of trial and error, and you have the potential for creating AGI-type systems in the future.
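The generate-filter-retrain loop described above can be sketched in miniature. This is a toy illustration only, not the actual STaR algorithm: the "model", the problems, and all function names are hypothetical stand-ins, and the real method fine-tunes an LLM on its own verified rationales rather than filtering noisy arithmetic guesses.

```python
import random

def toy_model(question, temperature):
    """Hypothetical stand-in for an LLM: guesses 'a + b' with some noise."""
    a, b = question
    noise = random.choice(range(-temperature, temperature + 1))
    return a + b + noise

def self_training_round(problems, temperature, samples_per_problem=4):
    """One round of the loop: sample candidate solutions, then keep only
    those that can be verified as correct. The kept examples become the
    synthetic training data for the next round of fine-tuning."""
    synthetic_data = []
    for a, b in problems:
        for _ in range(samples_per_problem):
            answer = toy_model((a, b), temperature)
            if answer == a + b:  # verification filter: keep correct outputs only
                synthetic_data.append(((a, b), answer))
                break  # one verified example per problem is enough here
    return synthetic_data

random.seed(0)  # deterministic toy run
problems = [(i, i + 1) for i in range(10)]
data = self_training_round(problems, temperature=2)
print(f"kept {len(data)} verified examples out of {len(problems)} problems")
```

In a real pipeline the verification step is the crucial design choice: the filter (a ground-truth answer, a checker, or a reward model) is what keeps the self-generated data from degrading the model over successive rounds.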

It will be interesting to see how much of this is realized in the next 5-10 years, and what its impact on the global economy across all industries will be. Given the stakes, there will be a lot of debate between doomers and accelerationists. I am in favour of accelerationism with sensible guardrails. Innovation and technological progress have rarely been stopped, and they have eventually created disruptive improvements for society.

Everything I have learned in the last 5 years looks basic compared to what is being built now, algorithmically and technologically, at an ever more rapid pace. All these topics require a lot of theory and modelling before one gets a clear idea of what they are and how to use them for business solutions.

Some paper links on what is being discussed are given below; there is a lot more information available online. I think the next few years will be very exciting for AI.


STaR: Self-Taught Reasoner, Bootstrapping Reasoning With Reasoning –

Tree of Thoughts: Deliberate Problem Solving with Large Language Models –

Good article on synthetic data –