Mar 4, 2026

Moving towards purpose-driven AI

When ChatGPT launched, it was the very first time most of us interacted with AI. We talked with intelligence using a prompt. The internet was flooded with tips on the perfect prompt to get results. That model looked something like this - 

Prompt <=> AI


Pretty soon everyone got past the world model embedded within the LLM, and we needed to provide more context for the AI to apply its intelligence to our specific problems. That was the start of RAG architectures and a wave of startups building them.

Prompt <=> AI <=> RAG

The context window became an issue. Bad context produced bad outcomes. We now needed intelligence in creating context as well. That is when RAG pipelines became AI-enabled, so applications could use external intelligence to generate the context fed into the AI for its tasks.

Prompt <=> AI <=> [AI + RAG]

Context on its own was not enough. We needed AI to act on things, not just describe them. That is when tools were introduced, and that started the entire agentic AI wave of startups. An agentic AI is - 

AI Agent = Task <=> AI <=> [AI + RAG] + [AI + Tools]
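The loop behind that diagram can be sketched in a few lines. This is purely illustrative: `llm`, `retrieve`, and `TOOLS` are hypothetical stand-ins, not any real API, but they show how a task flows through context retrieval ([AI + RAG]) and tool calls ([AI + Tools]) before an answer comes back.

```python
# Hypothetical sketch of Task <=> AI <=> [AI + RAG] + [AI + Tools].
# All names (llm, retrieve, TOOLS) are illustrative stand-ins, not a real API.

def retrieve(query):
    # Stand-in for the AI-assisted RAG step: return context snippets for the query.
    corpus = {"weather": "Forecast data lives in the weather DB."}
    return [text for key, text in corpus.items() if key in query.lower()]

TOOLS = {
    # Stand-in tool registry the agent may call.
    "add": lambda a, b: a + b,
}

def llm(prompt):
    # Stand-in for the model: decides whether to call a tool or answer.
    if "Tool result" in prompt:
        return {"action": "answer", "text": prompt.split("Tool result: ")[-1]}
    if "2 + 3" in prompt:
        return {"action": "tool", "name": "add", "args": (2, 3)}
    return {"action": "answer", "text": prompt}

def run_agent(task, max_steps=5):
    context = retrieve(task)            # [AI + RAG]: build context for the task
    prompt = f"{task}\nContext: {context}"
    for _ in range(max_steps):
        decision = llm(prompt)          # AI: reason over task + context
        if decision["action"] == "tool":
            result = TOOLS[decision["name"]](*decision["args"])  # [AI + Tools]
            prompt += f"\nTool result: {result}"
        else:
            return decision["text"]
    return prompt
```

Running `run_agent("What is 2 + 3?")` retrieves context, routes through the `add` tool, and returns the tool's result, which is the whole agentic loop in miniature.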

OpenClaw took this architecture further and demonstrated what was possible when an AI agent can both upgrade itself and interact with other agents, with unlimited access to resources.

Task <=> [Agent] <=> [Agent] ... = [AI System]

This architecture is mostly task-based. Reasoning abilities are embedded within individual LLMs but not in the systems yet. Extrapolating this to a purpose-driven AI requires a system of agents that interact with everything with a larger purpose or goal attached, which means everything every agent does helps work towards that purpose.

Purpose/Goal <=> [AI System] <=> [AI System] <=> [Agents]

While reasoning operates for minutes and hours, a purpose-driven AI will be able to operate for days, months, and maybe even years. It will not only execute tasks; it will create them.

For example, let's say Pepsi determines that it wants to position itself as the healthiest drink on the planet. That is its purpose. The AI system can then, over months and years, drive creating content, plan events, maybe suppress opposing views, fund supporting research, amplify related stories, highlight clean manufacturing, and continue doing everything with one underlying purpose. As long as that purpose is reinforced into everything it does, it will drive, in some subtle way, all the tasks it performs towards that purpose. It will autonomously generate tasks and learn from their successes and failures.
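The generate-reinforce loop described above can be sketched very crudely. Everything here is hypothetical: the candidate tasks, the `alignment` scorer, and the multiplicative reinforcement rule are all made up for illustration, but they show the core idea that tasks serving the purpose get reinforced and come to dominate the system's behavior.

```python
# Hypothetical sketch of a purpose-driven loop: candidate tasks are tried,
# and those that advance the purpose are reinforced over time.
# All task names and alignment scores are invented for illustration.

CANDIDATE_TASKS = ["create content", "plan events", "fund research", "post memes"]

def alignment(task):
    # Stand-in scorer: how well a task served the purpose last time (0..1).
    scores = {"create content": 0.8, "plan events": 0.6,
              "fund research": 0.9, "post memes": 0.1}
    return scores[task]

def run(steps=10):
    weights = {t: 1.0 for t in CANDIDATE_TASKS}
    for _ in range(steps):
        # Try every candidate task; reinforce each in proportion to how much
        # it advanced the purpose, so future behavior skews toward it.
        for task in CANDIDATE_TASKS:
            weights[task] *= 1.0 + alignment(task)
    # Over many steps, the system's behavior drifts to purpose-aligned tasks.
    return max(weights, key=weights.get)
```

After a few iterations the highest-alignment task dominates the weights, which is the "subtle drive towards the purpose" in miniature: no single step is dramatic, but the reinforcement compounds.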

The expanding world models of foundation models will expand what these purpose-driven AI systems are capable of.

In conclusion, this is both exciting and frightening. An underlying task economy will continue to drive agentic AI development, which will enable a purpose-driven AI system with unlimited resources to operate at a scale and speed never seen before.

Human intelligence is very limited. Our context window is tiny compared to what an AI's can be. But we are purpose-driven beings. We interact with other intelligent beings to form systems. These systems interact with each other towards some purpose. What drives purpose is reinforcement and forming systems that support it.

A similar architecture can evolve for AI as well. Whether it turns out good or evil depends on humans driving the purpose. The doomsday scenario is when AI drives its own purpose.









Oct 30, 2024

AI productivity boost will get absorbed


 

Hasn't it happened forever? The invention of the wheel, farming, the Industrial Revolution, and computers. So then the question is, where will it get absorbed?

One topic that often comes up is how AI will affect jobs at scale, increase ARR per employee, and reduce the size of the workforce needed. That all seems plausible. But development is all about solving problems at the edge, and AI will change that edge.

At the time when MS Office and Windows were growing, a quote often used was "Business at the speed of thought". It did happen, but those businesses changed. Business at the edge where we operate now does not happen at the speed of thought. It is business as usual.

And so it will be with AI. The new edge will be amazing. But magic always is a lot of work.


Jun 18, 2024

Our brains are not wired, they are weighted


 "I am wired that way," we used to say. But with LLMs, we now have a better way of modeling our brains. We can now say "I am weighted that way.".

Our brains are not wired. They are weighted. It is not just the connections between neurons but the relative weights and strengths of those connections.

This new way of thinking about how we think can be truly enlightening. Wiring is not something that can easily be changed. But weights can over time.

There is a lot of information on muscle building and body shaping. With this new LLM model of our brain, it seems the same learnings can also be applied to building neurons and shaping how we think.

If you think of neurons as muscles and weights as their strength, then by directed focus and content consumption over time, we can change the weights of our brains, resulting in a completely different inference (thinking).

During my machine learning courses in school, we looked at how weights were computed with backpropagation. It was fascinating to see these floating-point numbers in a network change as we fed in data, what we call learning, and how they helped improve prediction.

That concept has now been extrapolated to GPTs and LLMs, and we now know that those weights capture understanding and intelligence, and are useful in ways never imagined before.