
  • Dhurandhar: The Revenge — India’s most anticipated film of 2026


    Film Preview · Bollywood · March 2026

    With a four-hour runtime, five languages, and a ₹1,300 crore legacy to live up to, Aditya Dhar’s sequel is at once the boldest Bollywood bet of the decade and, potentially, its greatest triumph.


    When Dhurandhar released in 2025 and went on to gross over ₹1,354 crore worldwide, it didn’t just break box office records — it reset expectations for what Indian espionage cinema could be. Now, its direct sequel arrives on March 19, 2026, carrying the weight of one of Bollywood’s biggest franchises and a narrative that promises to be darker, more personal, and far more relentless.

    Director Aditya Dhar — who previously upended the genre with Uri: The Surgical Strike — returns as writer, director, and co-producer. The sequel picks up exactly where the first film’s climax left off: Hamza has just orchestrated the death of gang lord Rehman Dakait. He is now the undisputed king of Lyari. And he is barely holding together.

    A spy stripped down to his bones

    What made the first film compelling was its refusal to glamorise the intelligence operative. Hamza — real name Jaskirat Singh Rangi — was not a sleek, martini-sipping spy. He was an exhausted man deep in enemy territory, slowly losing himself to the role he played. The sequel leans even harder into that premise.

    Early reviews describe Ranveer Singh delivering a ferocious, dual-shaded performance where Hamza has fully stopped pretending he is merely playing a gangster. The action sequences, choreographed by international teams, are reportedly visceral and uncomfortable — not the stylised violence of commercial cinema, but the kind that leaves a residue.

    The revenge angle connects directly to the 26/11 Mumbai terror attacks. Having witnessed the orchestration of those strikes while embedded in Karachi’s criminal networks, Hamza’s mission transforms from infiltration to vengeance. His targets: ISI handler Major Iqbal (Arjun Rampal) and the shadowy architect known only as Bade Sahab — whose identity remains the film’s most closely guarded secret.

    The ensemble

    Alongside Ranveer Singh, the cast includes Sanjay Dutt as SP Chaudhary Aslam, R. Madhavan as IB Director Ajay Sanyal, Arjun Rampal as Major Iqbal, and Sara Arjun as Yalina. Rumours continue to swirl around Yami Gautam in a cameo and Emraan Hashmi potentially playing the film’s villain — a casting choice that would add enormous intrigue to the Bade Sahab mystery.

    The pan-India bet

    The original released only in Hindi. The sequel goes pan-India — Hindi, Tamil, Telugu, Malayalam, and Kannada — a significant expansion that signals the franchise’s ambition to become a truly national phenomenon. Special preview screenings on March 18 ride the festive momentum of Gudi Padwa, Ugadi, and Eid al-Fitr. The sole logistical challenge: a nearly four-hour runtime limits daily shows, requiring exceptional occupancy to outpace the predecessor’s records.

    Yet the franchise has earned that faith. Dhurandhar built a universe grounded in the real mechanics of terror financing, political syndicates, and geopolitical consequence — territory largely unexplored at this scale in Indian cinema. The sequel extends it into even more morally complex ground.

    Verdict: Whether The Revenge shatters its predecessor’s numbers or not, it is already a landmark — a major Hindi film that treats its audience as adults, its protagonist as flawed, and its genre as worthy of genuine craft.


  • The Bell Curve of Learning: From Palm Leaves to Prompts — and Beyond

    The Bell Curve of Learning: From Palm Leaves to Prompts — and Beyond

    How Humanity’s Quest for Knowledge Has Traced an Arc from Scarcity to Abundance to Dependency — and What Comes Next


    There is a shape to human history that we rarely step back far enough to see. It is not a straight line of progress. It is not a random scatter. For the story of how human beings learn — how they gather, store, transmit, and consume knowledge — the shape is something closer to a bell curve. And right now, we are living through its far right edge, in territory no generation has ever occupied before.

    To understand where we are, we need to understand the whole arc.

    The Left Tail: When Knowledge Was Survival

    The Age of Scarcity

    Cast your mind back not just decades but centuries. A student who wanted to learn faced obstacles so fundamental that the word “obstacle” barely captures them. Knowledge was physical. It lived in objects — manuscripts scratched onto palm leaves, papyrus scrolls, clay tablets, handwritten parchment — and those objects were rare, fragile, expensive, and jealously guarded.

    A monk copying scripture by candlelight was not engaged in a quaint ritual. He was performing one of the most critical acts of knowledge preservation available to his civilization. Every copy took months. Every copy could contain errors. Every fire, flood, invasion, or simple act of negligence could erase centuries of accumulated learning in an afternoon. The Library of Alexandria did not merely burn; the knowledge it contained burned with it, irretrievably.

    For the ordinary person — the student, the farmer’s child, the craftsman’s apprentice — access to formal knowledge was determined almost entirely by accident of birth. You learned what your parents knew, what your village knew, what the local priest or teacher knew. The idea of choosing what to study, of following intellectual curiosity wherever it led, was a luxury available to almost no one.

    The Hardships Were Not Metaphorical

    The difficulties were visceral and physical. Transportation to reach a teacher or school often meant days of travel on foot or by animal. Books, where they existed at all, had to be borrowed, copied by hand, or purchased at prices that represented months of income. Electricity did not exist — which meant that the hours available for reading and study were bounded by sunlight, and by the cost and availability of candles or oil lamps. A student studying into the night was burning money, literally.

    Teachers were scarce, unevenly distributed, and often inaccessible to students from the wrong social class, the wrong geography, or the wrong gender. The knowledge that did circulate was frequently incomplete, secondhand, or distorted by the ideological requirements of whoever controlled its reproduction and distribution.

    Yet — and this is important — the knowledge that was acquired under these conditions was deeply, almost violently, owned by the person who acquired it. You memorized because you had to. You debated because that was how you tested your understanding. You wrote laboriously because writing was thinking made visible. The friction of learning was terrible, but it produced something: a relationship between the learner and the knowledge that was intimate, hard-won, and durable.

    This is the left tail of the bell curve. Long. Hard. Slow. But not without its own kind of depth.

    The Ascending Slope: The Golden Age of Learning Infrastructure

    When the World Came Closer

    Then things began to change. Not all at once — the ascent of the bell curve spans roughly five centuries, accelerating sharply in the last one. But the direction was unmistakable: the barriers between the learner and the knowledge were falling, one by one.

    Gutenberg’s printing press in the fifteenth century was the first great democratization of knowledge. Suddenly, a book was not a unique artifact that took a monk months to produce. It was a reproducible object that could exist in thousands of identical copies. Errors could still exist — but they could also be corrected in the next edition. Ideas could travel across continents in years rather than generations. The Reformation, the Scientific Revolution, the Enlightenment — all of them were powered, at least in part, by the radical new availability of printed text.

    But the real inflection point was the twentieth century. Within a single lifetime, the infrastructure of learning was transformed beyond recognition.

    The Golden Age: Everything Became Feasible

    Public education systems spread across the globe. Schools reached villages that had never had them. Literacy rates that had barely moved for centuries began climbing — and then climbing steeply. Universities expanded from elite institutions serving tiny fractions of the population to mass systems educating millions.

    Transportation transformed the geography of learning. A student who might once have been confined to whatever existed within walking distance could now travel across a country, or across an ocean, to study at institutions that had previously been inaccessible. Scholarships, student loans, exchange programs — the financial and physical barriers to educational mobility were being dismantled, imperfectly but persistently.

    Electricity changed everything it touched. The studying day was no longer bounded by sunset. Libraries could stay open. Laboratories could run equipment. Recordings could be made, duplicated, distributed. Radio brought lectures into homes that had never seen a university. Television did the same, more vividly.

    And then came the internet — and with it, something that previous generations would have found literally miraculous. A student in a village in Bihar could watch a lecture by a professor at MIT. A child in rural Kenya could access the same mathematics curriculum as a child in London. A self-taught programmer in São Paulo could learn from the same resources as a computer science student at Stanford. Video lectures, digital libraries, online courses, Wikipedia, Khan Academy, YouTube tutorials — the middle of the bell curve arrived with extraordinary force and extraordinary generosity.

    The Peak: Abundant, Accessible, Almost Free

    At the top of the bell curve — roughly the period from 2000 to 2020 — the infrastructure of learning had achieved something genuinely remarkable. For the first time in human history, a motivated person with a smartphone and a data connection had access to more knowledge than the greatest libraries of the ancient world combined. The information was indexed, searchable, cross-referenced, and often free.

    This is the peak of the bell curve. Not perfect — access was still unequal, quality was still variable, the gap between information access and genuine understanding remained large. But as a system for making knowledge available to people who sought it, it was the best humanity had ever built.

    And then something happened that changed the nature of the question entirely.

    The Right Tail: The Era of On-Demand Answers

    The LLM Arrives

    The large language model does not merely make knowledge more accessible. It changes what “accessing knowledge” means.

    In every previous era — even the golden age of search engines and Wikipedia — learning required a transaction between the learner and the material. You searched, you found a source, you read it, you interpreted it, you synthesized it with what you already knew. The source remained external to you. The understanding had to be constructed inside your own mind.

    The LLM collapses this transaction. You type a question in natural language. You receive an answer in natural language. The synthesis, the summarization, the interpretation — all of it has already been done. The gap between question and answer, which in every previous era was where learning lived, has been closed.

    This is extraordinary. It is also, depending on how you look at it, deeply unsettling.

    The disclaimer that appears on virtually every LLM output — this model can make mistakes, please verify important information — is honest. But it contains a profound irony. To verify an LLM’s answer, you need exactly the kind of independent knowledge and critical-thinking capacity that habitual use of the LLM tends to erode. The tool warns you not to trust it completely, while simultaneously training you to depend on it totally.

    The New Hardship: Too Easy, Too Fast, Too Confident

    The hardship of the left tail of the bell curve was material. Cold rooms, expensive candles, scarce books, distant teachers. The hardship of the right tail is cognitive. It is the hardship of abundance.

    When every answer is available on demand, the motivation to construct your own understanding weakens. Why labor through a difficult text when you can ask for a summary? Why struggle with a mathematical derivation when you can ask for the steps? Why develop an argument from first principles when you can ask for an outline?

    Each of these shortcuts is individually harmless. Cumulatively, they may be producing a generation of people who are extraordinarily good at accessing synthesized answers and progressively less capable of generating original thought.

    The bell curve of learning may be bending downward on the right side — not because knowledge is becoming scarcer, but because the act of genuinely acquiring it is becoming rarer.

    The Asymmetry No One Talks About

    Here is what makes the right tail of this bell curve different from the left tail in a way that matters enormously.

    On the left tail, scarcity forced depth. The student who had access to twelve books read all twelve of them deeply, multiple times, argued about them, memorized them, internalized them. The knowledge was genuinely theirs.

    On the right tail, abundance enables breadth without depth. You can now have an LLM-mediated conversation about quantum mechanics, Renaissance art, contract law, and the history of the Ottoman Empire all in the same afternoon — and leave each conversation with the feeling of having understood, without the substance of having learned. The feeling of knowing is decoupled from the state of knowing.

    This is not a problem unique to LLMs — it began with search engines, and arguably with television before that. But LLMs accelerate it dramatically, because the conversational interface creates a more powerful illusion of genuine engagement than a web search ever did. When a search engine returns ten links, you know you have to read them. When an LLM returns a fluent, confident paragraph, you are tempted to believe the reading has already been done on your behalf.

    It has not. The LLM has pattern-matched to a statistically likely answer. The understanding has not transferred. The knowledge is not yours.

    Each Person, Their Own Curriculum — and the Fragmentation of Shared Knowledge

    There is another dimension to the right tail of this bell curve that deserves attention: the personalization of knowledge itself.

    In the golden age of learning infrastructure, shared knowledge was a social fact. Students in the same school read the same books, heard the same lectures, sat the same examinations. This created common intellectual frameworks, shared references, a substrate for conversation and debate. You knew what others had learned because the curriculum was, to a significant degree, collective.

    LLM-mediated learning is radically individualized. Each person receives answers tailored to their specific question, framed in their preferred style, pitched at their preferred level. This is one of the technology’s great strengths. But it also means that two people using LLMs to learn about the same topic may end up with entirely different — and potentially incompatible — understandings of it, because they asked different questions and received different framings.

    The shared knowledge commons, already under pressure from social media’s fragmentation of information, is being further atomized by AI personalization. We are heading toward a world in which people do not merely hold different opinions, but inhabit different factual universes — each one feeling well-informed, because their LLM gave them confident, fluent answers.

    So Where Is the Curve Going?

    If the left tail is scarcity, and the peak is abundance, and the right tail is dependency — what lies beyond the right tail?

    This is not a rhetorical question. The bell curve is a useful model precisely because it implies a descent on the other side. And there are two plausible versions of that descent.

    The Dark Descent: Cognitive Atrophy

    In one version, the right tail continues downward because the skills that the bell curve peak produced — critical thinking, synthesis, independent analysis, the ability to evaluate sources — are no longer being systematically built. A generation educated primarily through LLM interaction will know how to prompt. It may not know how to think. And when the LLM is wrong — which it frequently is — they will lack the independent knowledge base to recognize the error.

    This is not a new fear. Every new information technology has generated it. Socrates worried that writing would destroy memory. Critics of the printing press worried it would spread heresy and error unchecked. Educators worried that television and then the internet would produce passive consumers of information rather than active thinkers. Some of those fears were exaggerated. Some were not.

    The LLM case is different in degree, if not in kind. Previous technologies changed the delivery of information. LLMs change the construction of understanding. That is a more fundamental intervention.

    The Hopeful Ascent: A New Kind of Learning

    In another version, the curve does not simply descend. It begins a new shape — a second bell, built on different foundations.

    The optimistic reading of the LLM era is that it frees human cognition for higher-order tasks by handling lower-order ones. Just as the printing press freed scholars from the labor of copying manuscripts so they could spend more time thinking, LLMs can free learners from the labor of information retrieval so they can spend more time on analysis, creativity, and synthesis.

    This is possible. But it requires that we make a conscious decision to use these tools as amplifiers of human thought rather than replacements for it. It requires educational systems that teach prompting alongside critical evaluation. It requires learners who understand what an LLM actually is — a sophisticated pattern-matching system, not an oracle — and who maintain the independent knowledge base needed to interrogate its outputs.

    What Technology Is Waiting in the Wings?

    And beyond LLMs — what comes next? The honest answer is that we do not know with certainty. But the trajectory suggests a few possibilities.

    Brain-computer interfaces represent the most radical transformation: knowledge not retrieved or even read, but directly integrated into cognition. The friction of the prompt-response cycle — already dramatically reduced compared to reading — would be eliminated entirely. What would learning even mean in a world where information can be uploaded rather than acquired?

    Ambient AI — systems that observe your environment, your work, your questions, and provide answers without being asked — would remove even the prompt. Knowledge would simply appear when needed, contextually, invisibly.

    Collective intelligence systems — networks in which human and AI cognition are so tightly interwoven that the boundary between individual knowledge and shared knowledge becomes meaningless — represent perhaps the most philosophically vertiginous possibility.

    Each of these technologies would push the curve further to the right — more access, less friction, more dependency, more fundamental questions about what human understanding means.

    The Question the Bell Curve Forces Us to Ask

    The bell curve of learning is ultimately a curve about friction. The left tail was high friction — getting knowledge was hard, slow, and expensive. The peak was moderate friction — knowledge was available, but still required effort to acquire and understand. The right tail is low friction — answers arrive instantly, fluently, and without demanding that you do the work of constructing understanding yourself.

    The question the curve forces us to ask is: was the friction the problem, or was it part of the process?

    The optimist says the friction was the problem — an accident of historical limitation, not an intrinsic feature of learning. Remove the friction, and you liberate human potential.

    The pessimist says the friction was the process — that the struggle to acquire and construct knowledge was not a bug but the feature, that genuine understanding requires effort by definition, and that tools which eliminate the effort also eliminate the understanding.

    The truth is probably somewhere between — and the answer depends heavily on what we do with the friction we have recovered. If we use the time and cognitive energy that LLMs free up for deeper inquiry, more creative work, more ambitious questions — then the right tail of the bell curve is not a descent but a launching pad.

    If we use it to scroll, to accept the first answer we receive, to outsource not just information retrieval but judgment itself — then the descent is real, and the next technology will accelerate it further.

    Conclusion: The Manuscript and the Prompt

    There is a strange symmetry between the two extremes of this bell curve. The student reading a manuscript on palm leaves and the student typing into a chat interface are both, in some sense, alone with a text. Both are seeking understanding. Both face limitations — one the limitation of scarcity, one the limitation of abundance.

    The difference is what the engagement demands of them. The manuscript demanded everything: attention, effort, memory, interpretation. The prompt demands very little — just the ability to formulate a question and evaluate an answer. And the ability to evaluate an answer, as we have seen, requires exactly the kind of deep knowledge that the prompt is being used to avoid building.

    We are not heading back to palm leaves. The bell curve does not reverse. But we are at a moment of genuine choice about how we inhabit the right tail of its arc — whether we allow it to be a place of cognitive atrophy, or whether we use it, deliberately and wisely, as the foundation for a new kind of learning that no previous era could have imagined.

    The next technology is already being built in laboratories around the world. Whether it extends human intelligence or replaces it will depend less on the technology than on the choices we make right now, while we still remember what it felt like to struggle toward understanding — and why that struggle mattered.

    The palm leaf is gone. The printed book is endangered. The prompt is king. The question is not what tool we use to learn — it is whether we are still, in any meaningful sense, learning at all.

  • India’s LPG Crisis: When a Strait Holds a Billion Kitchens Hostage

    India’s LPG Crisis: When a Strait Holds a Billion Kitchens Hostage

    March 2026 — and India’s most essential kitchen fuel is suddenly, alarmingly scarce.


    The Scene on the Ground

    Long queues outside gas agencies. Restaurant owners staring at empty cylinders. Hotel kitchens going cold. Community kitchens rationing their last reserves. The shortage has triggered panic buying, commercial shutdowns, and rising black-market prices, especially in major cities such as Delhi, Mumbai, Bengaluru, Chennai, and Kolkata, where cylinders are reportedly selling for as much as ₹5,000 apiece, far above the official price.

    For hotels, hostels, restaurants, and community kitchens — the commercial backbone of India’s food economy — the situation has moved from inconvenience to existential threat. As one industry voice put it plainly: “It is a question of survival, not viability. Yes, the costs will be slightly higher. But we need to manage that.” Those with electric cooking equipment are now breathing easier. Those without are scrambling.


    How Did We Get Here? The Root Cause

    One Strait. One Shock. One Billion Kitchens.

    The crisis has a single, dramatic trigger: the escalating conflict in West Asia. For the first time in recorded history, the Strait of Hormuz — through which roughly 20% of the world’s crude oil, natural gas, and LPG flows — has been effectively closed to commercial vessels.

    For most countries, this would be serious. For India, it is a structural emergency.

    India’s LPG imports account for around 60% of domestic consumption, and about 90% of those imports normally move through the Strait of Hormuz. That puts roughly 54% of India’s normal LPG availability directly at risk for as long as the corridor remains shut.
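The exposure figure follows directly from the two shares; a quick back-of-envelope check, using only the percentages quoted above, confirms it:

```python
# Back-of-envelope check of India's LPG exposure to a Hormuz closure,
# using only the shares cited in the article.
import_share = 0.60   # imports as a share of domestic LPG consumption
hormuz_share = 0.90   # share of those imports transiting the Strait of Hormuz

exposed_share = import_share * hormuz_share
print(f"Share of normal LPG availability at risk: {exposed_share:.0%}")
# → 54%
```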

    Why India Is So Exposed

    India’s LPG supply system relies heavily on continuous imports arriving on schedule. India currently has LPG storage capacity of around 1.34 million tonnes, yet the country consumes roughly 90,000 tonnes of LPG every day — meaning existing storage infrastructure can cover only about two weeks of national consumption. Because LPG must be stored under pressure or at very low temperatures, building large storage buffers is far more complex and expensive than storing petrol or diesel.
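The two-week figure can be checked directly from the storage and consumption numbers above (a rough sketch: it ignores in-transit cargoes and refinery tankage):

```python
# Days of national LPG consumption covered by storage, from the figures
# cited above: 1.34 million tonnes of storage, ~90,000 tonnes/day of use.
storage_tonnes = 1_340_000
daily_consumption_tonnes = 90_000

cover_days = storage_tonnes / daily_consumption_tonnes
print(f"Storage cover: {cover_days:.1f} days (~two weeks)")
# → 14.9 days
```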

    Petrol and diesel remain widely available in India because the country has strong refining capacity, diversified crude imports, and better storage buffers. LPG, however, relies heavily on imports passing through a single critical shipping route. That asymmetry is now playing out in real time.

    Why Petrol Is Fine but Gas Is Not

    This puzzles many people. The answer is structural. LPG is not a by-product of crude refining in the same proportion as petrol or diesel. Even if refineries manage to increase LPG output by 10–20% above current domestic production, domestic supply would only rise to roughly 47–50% of total demand, leaving a significant gap that must still be filled through imports. There is no domestic production fix large enough to close a 54% import hole overnight.


    The Human and Economic Cost

    Restaurants on the Brink

    The National Restaurant Association of India (NRAI) has called it a “crisis situation” that will lead to the closure of many restaurants. NRAI president Sagar Daryani told CNBC that 90% of restaurants in India rely on LPG cylinders to run their kitchens, and that if the LPG supply issues persist, it would lead to “closure of business and job losses.” The NRAI represents over 500,000 restaurants across India — an industry generating annual turnover of over ₹5.7 trillion and employing over 8 million people.

    Nearly 10,000 establishments were set to shut down in Tamil Nadu alone, including the majority of small and medium-sized restaurants, according to the Chennai Hotel Association. Mumbai’s AHAR lobby group warned that many of its members are on the “verge of closure.”

    The Household Anxiety

    As of January 2026, India has around 332.1 million active domestic LPG connections, including 104.29 million under the Pradhan Mantri Ujjwala Yojana (PMUY) — connections created specifically to bring clean cooking to India’s poorest households. The irony is sharp: if the crisis deepens, a welfare programme built to lift people off firewood and coal could push them back toward exactly those fuels, reversing what the Modi government’s Ujjwala scheme spent years phasing out.


    What the Government Is Doing

    The government has moved quickly, invoking emergency powers and restructuring the supply chain:

    Emergency legal orders: Under the Essential Commodities Act, 1955, the government issued the LPG Control Order on March 8, 2026, and the Natural Gas Control Order on March 9, 2026. The LPG order directed all Indian refineries to maximise LPG yields by channelling all C3 and C4 hydrocarbon streams — propane, butane, propylene, and butenes — exclusively to Oil Marketing Companies for domestic cooking gas. LPG production increased by 28% within five days of the directive.

    Household prioritization over commercial supply: The government made a difficult but deliberate choice — directing oil marketing companies to prioritize supplying LPG to the 330 million households that use it as a primary cooking fuel, over the 3 million businesses that use commercial cylinders.

    Supply diversification: India is increasing LPG sourcing from the US, Norway, Canada, and Russia. India had already arranged a 2.2 MTPA US LPG deal for 2026, equivalent to about 10% of annual imports. On March 15–16, 2026, two Indian-flagged LPG carriers — Shivalik and Nanda Devi — successfully crossed the Strait of Hormuz carrying more than 92,000 metric tonnes of LPG to India. A partial relief, but still just about 5% of monthly import needs.
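Both percentages in this paragraph can be sanity-checked against the consumption and import-share figures given earlier (a rough estimate only; actual import volumes vary month to month):

```python
# Rough sanity check of the "about 10% of annual imports" and "about 5% of
# monthly import needs" figures, using numbers cited earlier in the article.
daily_consumption_t = 90_000                          # tonnes of LPG per day
annual_imports_t = daily_consumption_t * 365 * 0.60   # ~60% of demand imported
monthly_imports_t = annual_imports_t / 12

us_deal_t = 2_200_000   # 2.2 MTPA US contract
cargo_t = 92_000        # combined Shivalik + Nanda Devi cargo

print(f"US deal as share of annual imports: {us_deal_t / annual_imports_t:.1%}")
# → 11.2%
print(f"Two-carrier cargo vs monthly imports: {cargo_t / monthly_imports_t:.1%}")
# → 5.6%
```

Both results land close to the article's round figures, which suggests the quoted numbers are internally consistent.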

    Anti-hoarding and price controls: The government has taken the responsible course — to regulate commercial LPG with clear priorities and a transparent allocation mechanism. A three-member committee comprising Executive Directors from IOCL, HPCL, and BPCL was constituted on March 9, 2026. In a major decision, 20% of the average monthly commercial LPG requirement will be allocated by OMCs in coordination with state governments to ensure there is no hoarding or black marketing.

    Price protection for the poor: Despite the Saudi Contract Price rising 41% between July 2023 and March 2026, the PMUY beneficiary price has actually fallen 32% in the same period and stands at ₹613 per 14.2 kg cylinder in Delhi. The non-subsidised consumer price stands at ₹913, against a market-determined price of approximately ₹987.
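The scale of the price protection can be made concrete with the Delhi figures quoted above (illustrative arithmetic only; the implied per-cylinder support is my inference from the quoted prices, not an official subsidy figure):

```python
# Implied per-cylinder support in Delhi, from the quoted prices for a
# 14.2 kg cylinder. "Implied support" here is an inference, not official data.
market_price = 987      # approximate market-determined price, ₹
consumer_price = 913    # non-subsidised consumer price, ₹
pmuy_price = 613        # PMUY beneficiary price, ₹

print(f"Implied support per general cylinder: ₹{market_price - consumer_price}")
# → ₹74
print(f"Implied support per PMUY cylinder:    ₹{market_price - pmuy_price}")
# → ₹374
```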


    The Alternatives: What Do You Do When the Gas Runs Out?

    For Households

    Induction cooktops are the fastest, cleanest pivot. Induction stove sales have already surged amid fears of shortage, showing how quickly households respond when reliability is threatened. These cooktops heat compatible cookware directly through electromagnetic induction and can take over most tasks normally handled by LPG stoves. The upfront cost is modest, running costs are comparable, and the technology is mature.

    Microwave ovens offer a partial bridge — microwaves can help households manage short-term LPG shortages by reheating leftovers and preparing quick meals. Ready-to-eat food, frozen items, and simple recipes can be handled without using gas stoves.

    Biomass and traditional stoves remain a short-term rural fallback, though they reverse hard-won public health gains from the clean cooking transition.

    For the Hospitality and Restaurant Sector

    The commercial kitchen is where the survival calculus is starkest. Hotels and restaurants that invested in electric or induction-based cooking infrastructure are now quietly relieved. Those fully dependent on LPG cylinders face the hardest choices.

    The Ministry of Environment, Forest and Climate Change has advised State Pollution Control Boards to permit, for one month during the crisis, the use of biomass, RDF pellets, and kerosene or coal as alternate fuels for the hospitality and restaurant segment — enabling a wider range of establishments to switch and freeing up LPG for priority consumers.

    The medium-term answer for urban commercial kitchens is a mix of electric induction cooking, piped natural gas (PNG) where infrastructure exists, and biogas where feasible.

    The Longer Structural Answer

    More durable domestic hedges include electric cooking and bioenergy. India has 143.60 GW of cumulative solar capacity, and 195 compressed biogas plants are being set up across the country. Over time, a diversified cooking-energy mix would reduce exposure to any single external fuel route.

    Piped Natural Gas can help in urban areas, though it also partly depends on imported gas. The longer-run problem concerns energy security more broadly — a country of India’s scale cannot allow clean-cooking resilience to depend excessively on a single imported fuel passing through a single strategic chokepoint.


    The Structural Lesson India Cannot Afford to Ignore

    This crisis is not just about a war. It is about decades of policy choices that concentrated India’s cooking energy dependency on a single imported fuel, sourced predominantly from a single geopolitical region, flowing through a single maritime chokepoint.

    The resilience-efficiency trade-off is stark: concentrated sourcing is cheaper in stable periods, but diversified supply networks reduce vulnerability when shocks hit. India should treat alternative supply contracts as insurance rather than as an expensive deviation from normal procurement. More storage, terminal flexibility, rail evacuation, and pipeline connectivity would reduce the cost of using that insurance.

    The good news, if there is any in a crisis, is that this disruption is forcing exactly the diversification that energy security analysts have long recommended. Induction cooktop sales are surging. PNG connections are being sought out. Biogas is being reconsidered. Households and businesses that were waiting for a reason to transition are now finding one.


    Conclusion: Survival Mode is Also a Signal

    The hotels, hostels, and community kitchens now rationing their gas, eyeing electric cooktops, and calculating revised cost structures are not just managing a crisis. They are glimpsing the future of India’s cooking energy mix — one that is more distributed, more resilient, and less hostage to a single narrow strait on the other side of the world.

    The transition will cost more in the short run. But as the restaurant owner said — it is a question of survival, not viability. And sometimes survival is the best teacher.


    Written in March 2026, as the crisis continues to unfold.

  • Agentic AI in B2B & FinTech: The Autonomous Operations Revolution


    What Is Agentic AI?

    Artificial Intelligence has passed through many evolutionary stages — from rule-based expert systems to statistical machine learning, and from narrow classifiers to large language models. But a fundamentally different paradigm is now taking center stage: Agentic AI.

    Unlike traditional AI that predicts or recommends, Agentic AI can autonomously plan, decide, and execute multi-step tasks to achieve a goal — often interacting with other systems, data sources, or agents in the process. It does not wait to be told what to do. It observes its environment, reasons about the situation, makes a decision, and takes action. Then it learns from the outcome.

    In B2B environments, Agentic AI acts like a digital operations manager rather than just a prediction tool.

    This distinction reshapes entire operating models. In supply chains, procurement decisions get made without waiting for a human to review a dashboard. In finance, fraudulent transactions get blocked in milliseconds. In sales, the pipeline manages itself. The implications — for productivity, for the nature of work, and for competitive dynamics — are profound.


    The Agentic AI Loop

    All agentic systems follow a common operational loop:

    1. Observe — Collect data from systems, sensors, and APIs in real time.
    2. Reason — Analyze data using AI models to understand what is happening and why.
    3. Plan — Generate and evaluate potential strategies or responses.
    4. Act — Execute the chosen action through APIs, software, or direct commands.
    5. Learn — Update models and decision parameters based on outcomes.

    This loop runs continuously, without requiring a human to initiate each cycle. That is what separates agentic AI from conventional automation or analytics — it is not triggered by a report or a workflow button; it is always running, always watching, always acting.

    The architecture that makes this possible is often described as AI agents + orchestration layer + enterprise data. The orchestration layer coordinates agents, manages state, routes tasks, and handles failures. Without it, autonomous action at scale is not possible.
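    As a concrete illustration, the five-stage loop can be sketched in a few dozen lines of Python. Everything here (the class name, the risk-score observation, the threshold-nudging learning rule) is an illustrative assumption, not code from any particular agent framework:

```python
import random

class Agent:
    """Minimal sketch of the observe-reason-plan-act-learn loop.

    All names and rules here are illustrative, not from a real framework.
    """

    def __init__(self):
        self.threshold = 0.5  # decision parameter updated by learning

    def observe(self):
        # Stand-in for reading sensors, APIs, or system state.
        return {"risk_score": random.random()}

    def reason(self, observation):
        # Interpret the observation: is intervention needed?
        return observation["risk_score"] > self.threshold

    def plan(self, needs_action):
        # Choose among candidate responses.
        return "escalate" if needs_action else "monitor"

    def act(self, action):
        # Stand-in for calling an API or issuing a command.
        return {"action": action, "success": True}

    def learn(self, outcome):
        # Nudge the decision parameter based on the outcome.
        if outcome["success"] and outcome["action"] == "escalate":
            self.threshold *= 0.99  # escalating worked; be slightly more eager

    def step(self):
        obs = self.observe()
        needed = self.reason(obs)
        action = self.plan(needed)
        outcome = self.act(action)
        self.learn(outcome)
        return action

agent = Agent()
actions = [agent.step() for _ in range(10)]
print(actions)
```

    The `step` method is the whole point: no human triggers a cycle, and the `learn` stage means the next cycle runs with updated parameters.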


    Traditional AI vs. Agentic AI

    Traditional AI          | Agentic AI
    ------------------------|---------------------
    Predicts                | Acts
    Single task             | Multi-step workflows
    Human executes decision | AI executes decision
    Static models           | Autonomous agents

    Traditional AI answers: what is likely to happen? Agentic AI answers: what should I do about it? — and then does it. This difference is the entire distance between a dashboard and an autonomous operator.


    Part I: Agentic AI in B2B Operations

    Business-to-business operations involve enormous complexity — thousands of suppliers, fluctuating logistics networks, dynamic sales pipelines, and global procurement decisions made under time pressure. These are precisely the conditions where autonomous agents deliver the most value.

    1. Manufacturing Supply Chain Agent

    Industry: Industrial Manufacturing — Siemens, Bosch

    A large manufacturing company producing electric motors faces a sudden crisis: a critical component supplier in Taiwan reports a two-week delay. In a traditional setting, a procurement manager would need to discover the delay, assess its downstream impact, research alternatives, negotiate with new suppliers, and update production schedules — a process that itself could take days.

    With an Agentic Supply Chain AI deployed, the response is immediate:

    1. Detect disruption — The agent continuously monitors supplier portals and logistics data, catching the delay the moment it is reported.
    2. Analyze impact — It simulates how the delay ripples through factory schedules, inventory levels, and customer delivery commitments.
    3. Generate options — Three alternatives are identified: Supplier B in Vietnam, Supplier C in India, and a temporary product redesign using an alternative component.
    4. Evaluate costs and risks — Vietnam delivers on time at an 8% cost increase; India is 3% cheaper but slower on logistics; redesign requires two weeks of engineering time.
    5. Take action autonomously — The agent negotiates digital procurement contracts, adjusts production schedules, and informs logistics teams — all without human intervention.

    Result: Production disruption avoided without human intervention. The system observed → reasoned → decided → executed.
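    The evaluation in steps 3 and 4 amounts to collapsing cost and delay onto one comparable axis. A minimal sketch: the cost deltas come from the example above, but the delay figures and the delay-to-cost conversion rate are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost_delta_pct: float  # % cost change vs. the original supplier
    delay_days: int        # extra days before production resumes

# Cost deltas match the worked example; delay figures are assumed.
options = [
    Option("Supplier B (Vietnam)", +8.0, 0),
    Option("Supplier C (India)", -3.0, 5),
    Option("Product redesign", 0.0, 14),
]

def score(opt, delay_cost_pct_per_day=3.0):
    # Convert delay into an equivalent cost penalty (assumed rate) so
    # all options compare on one axis; lower is better.
    return opt.cost_delta_pct + opt.delay_days * delay_cost_pct_per_day

best = min(options, key=score)
print(best.name)  # -> Supplier B (Vietnam)
```

    A production agent would replace the single conversion rate with a simulated downstream impact per option, but the shape of the decision is the same.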


    2. B2B Sales Pipeline Agent

    Industry: Enterprise Software — Salesforce

    A SaaS company selling cybersecurity software to banks receives hundreds of leads weekly. Most are unqualified. Traditionally, sales representatives would manually triage leads, research companies, draft outreach, and schedule meetings — an enormous and error-prone workload.

    A Sales Agent AI takes over the entire pipeline:

    • Monitors incoming leads from website forms, LinkedIn engagement, and webinar attendees.
    • Autonomously researches each company — gathering data on size, IT spending, and regulatory pressure.
    • Scores and qualifies leads using probability models.
    • Sends personalized emails, schedules meetings, and assigns high-value leads to senior managers.
    • When a sector suddenly shows high interest (e.g., fintech), dynamically redirects marketing spend.

    Result: Sales team productivity increases 40%. The AI acts like a digital sales manager — not just a recommender.
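    The scoring step can be sketched as a toy rule-based model. The features, weights, and qualification threshold below are illustrative assumptions; a production system would learn them from historical conversion data:

```python
def score_lead(company):
    """Toy lead-qualification score in [0, 1].

    Features and weights are illustrative, not a real model.
    """
    score = 0.0
    if company.get("employees", 0) > 500:
        score += 0.4  # enterprise-sized institution
    if company.get("it_spend_musd", 0) > 10:
        score += 0.3  # meaningful security budget
    if company.get("regulated", False):
        score += 0.3  # regulatory pressure drives urgency
    return score

leads = [
    {"name": "Acme Bank", "employees": 2000, "it_spend_musd": 25, "regulated": True},
    {"name": "Tiny Shop", "employees": 12, "it_spend_musd": 0.1, "regulated": False},
]

qualified = [l["name"] for l in leads if score_lead(l) >= 0.7]
print(qualified)  # -> ['Acme Bank']
```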


    3. Logistics Optimization Agent

    Industry: Global Logistics — Amazon

    A logistics firm managing 10,000 daily shipments across Asia faces constant disruption: port congestion, weather events, customs delays. At this scale, manual re-routing is not feasible.

    The Logistics Optimization Agent operates across three dimensions simultaneously:

    • Real-time monitoring of weather systems, port capacity, and trucking routes.
    • Predictive disruption modeling — detecting, for example, a cyclone forming near Chennai port before it causes delays.
    • Automatic re-planning: rerouting cargo through Singapore, booking alternative ships, and immediately notifying warehouse managers.

    A key capability is multi-agent coordination: a shipping agent, an inventory agent, and a delivery agent work in concert through a shared orchestration layer.

    Result: Delivery reliability improves from 88% to 96%.
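    The re-planning step is, at its core, a shortest-path search over shipping legs with disrupted legs removed. A minimal sketch using Dijkstra's algorithm; the port names and transit times (in days) are invented for illustration:

```python
import heapq

def shortest_route(graph, start, end, blocked=frozenset()):
    """Dijkstra over shipping legs; `blocked` removes disrupted legs."""
    dist = {start: 0}
    heap = [(0, start, [start])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == end:
            return d, path
        for nxt, days in graph.get(node, []):
            if (node, nxt) in blocked:
                continue
            nd = d + days
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None

# Illustrative network: (destination, transit days) per origin.
legs = {
    "Mumbai":    [("Chennai", 3), ("Singapore", 8)],
    "Chennai":   [("Singapore", 4)],
    "Singapore": [("Rotterdam", 18)],
}

normal = shortest_route(legs, "Mumbai", "Rotterdam")
# Cyclone near Chennai: the agent drops Chennai legs and re-plans.
rerouted = shortest_route(legs, "Mumbai", "Rotterdam",
                          blocked={("Mumbai", "Chennai"), ("Chennai", "Singapore")})
print(normal)    # -> (25, ['Mumbai', 'Chennai', 'Singapore', 'Rotterdam'])
print(rerouted)  # -> (26, ['Mumbai', 'Singapore', 'Rotterdam'])
```

    In the multi-agent version, the shipping agent publishes the new route and the inventory and delivery agents replan against it through the orchestration layer.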


    4. Financial Procurement Agent

    Industry: Corporate Finance

    A large automotive firm spends ₹5,000 crore annually on procurement. Manual vendor comparison is slow and inconsistent. The Procurement Agent transforms this entirely:

    1. Reads purchase requirements automatically from internal systems.
    2. Searches supplier databases globally.
    3. Evaluates vendors across cost, reliability, and sustainability scores.
    4. Runs negotiation simulations to predict optimal contract terms.
    5. Suggests contracts and automatically sends RFQs to shortlisted vendors.

    Result: Procurement costs reduced 7–12% annually.
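    Step 3, scoring vendors across cost, reliability, and sustainability, can be sketched as a weighted sum. The weights and vendor scores below are illustrative assumptions (scores are 0-1, higher is better; "cost" here means cost-competitiveness):

```python
def rank_vendors(vendors, weights=None):
    """Rank vendors by a weighted sum over the three axes in the text.

    Default weights are assumptions, not a recommended policy.
    """
    weights = weights or {"cost": 0.5, "reliability": 0.3, "sustainability": 0.2}

    def total(v):
        return sum(v[axis] * w for axis, w in weights.items())

    return sorted(vendors, key=total, reverse=True)

vendors = [
    {"name": "Vendor A", "cost": 0.9, "reliability": 0.6, "sustainability": 0.5},
    {"name": "Vendor B", "cost": 0.7, "reliability": 0.9, "sustainability": 0.8},
]
shortlist = rank_vendors(vendors)
print([v["name"] for v in shortlist])  # -> ['Vendor B', 'Vendor A']
```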


    Part II: Agentic AI in FinTech & Financial Services

    Financial services are arguably the domain where Agentic AI is making the most dramatic impact. The sector’s defining characteristics — real-time transaction streams, large data volumes, high-stakes decisions, and strict regulatory requirements — make it ideal terrain for autonomous agents.

    In fintech, Agentic AI acts like a digital financial manager capable of monitoring systems, making risk decisions, executing transactions, and learning from market behavior.

    1. Autonomous Fraud Detection Agent

    Example firms: Visa, Mastercard

    A global payment network processes millions of transactions per minute. One evening, an unusual pattern emerges: many small transactions appearing from different countries through the same merchant gateway. In the time it takes a human analyst to notice the pattern, hundreds of fraudulent transactions could already be complete.

    The fraud detection agent responds in real time:

    • Continuously scans transaction streams for anomalous patterns.
    • Compares detected patterns against a historical database of known fraud signatures.
    • Considers options: blocking transactions, verifying customer identity, alerting the issuing bank.
    • Acts: temporarily blocks suspicious transactions, sends real-time alerts to banks, requests OTP verification from customers.

    Result: Fraud losses prevented within seconds, without manual monitoring.
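    The velocity pattern described above (many small transactions from many countries through one merchant gateway) can be sketched as a simple per-merchant aggregation. The thresholds are illustrative assumptions; real systems use learned models over far richer features:

```python
def flag_suspicious(transactions, window_txn_limit=5, country_limit=3):
    """Flag merchant gateways with anomalous velocity in one time window:
    more than `window_txn_limit` transactions from more than
    `country_limit` distinct countries. Thresholds are assumptions.
    """
    by_merchant = {}
    for t in transactions:
        stats = by_merchant.setdefault(t["merchant"], {"count": 0, "countries": set()})
        stats["count"] += 1
        stats["countries"].add(t["country"])
    return [m for m, s in by_merchant.items()
            if s["count"] > window_txn_limit and len(s["countries"]) > country_limit]

# Gateway G1 shows the suspicious pattern; G2 looks normal.
txns = (
    [{"merchant": "G1", "country": c, "amount": 4.99}
     for c in ["US", "BR", "NG", "VN", "PL", "US", "BR", "NG"]]
    + [{"merchant": "G2", "country": "IN", "amount": 120.0} for _ in range(3)]
)
print(flag_suspicious(txns))  # -> ['G1']
```

    The real decision layer then chooses among the responses listed above: block, verify identity, or alert the issuing bank.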


    2. Autonomous Investment Portfolio Agent

    Example fintechs: Betterment, Wealthfront

    A pension fund allocates ₹500 crore to a robo-advisory platform. The Agentic AI portfolio system works continuously:

    • Understands investor objectives — risk tolerance, liquidity needs, regulatory limits.
    • Continuously monitors bond yields, equity volatility, and macroeconomic indicators.
    • Simulates thousands of portfolio combinations — generating, for example, an allocation of 35% global equities, 30% government bonds, 20% ETFs, 15% commodities.
    • Autonomously rebalances when volatility rises: reducing equities, increasing bonds and gold.

    Result: Portfolio risk optimized dynamically without manual intervention.
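    Rebalancing reduces to computing the trades that move current holdings to target weights, with the target set switching when volatility crosses a threshold. The calm-market allocation mirrors the 35/30/20/15 example above; the stressed allocation and the volatility threshold are assumptions:

```python
def rebalance(holdings, targets, total):
    """Trades (in crore) needed to move holdings to target weights.

    Positive values are buys, negative values are sells.
    """
    return {asset: round(total * w - holdings.get(asset, 0.0), 2)
            for asset, w in targets.items()}

total = 500.0  # crore, as in the pension-fund example
holdings = {"global_equities": 200, "govt_bonds": 120, "etfs": 100, "commodities": 80}

# Calm-market targets from the text: 35/30/20/15.
calm = {"global_equities": 0.35, "govt_bonds": 0.30, "etfs": 0.20, "commodities": 0.15}
# Assumed defensive shift toward bonds and commodities under stress.
stressed = {"global_equities": 0.25, "govt_bonds": 0.40, "etfs": 0.15, "commodities": 0.20}

volatility = 0.32  # annualised; the 0.25 trigger is an assumption
targets = stressed if volatility > 0.25 else calm
trades = rebalance(holdings, targets, total)
print(trades)
```

    Note that the trades net to zero: the agent sells equities and ETFs to fund the bond and commodity purchases rather than calling for new capital.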


    3. Autonomous Credit Underwriting Agent

    Example fintech: Upstart

    A digital lender receives 10,000 loan applications per day. Traditional banks use only credit score and income — a blunt instrument that excludes many creditworthy borrowers. Agentic AI underwriting works differently:

    • Data aggregation: collects bank transactions, employment history, education records, and spending patterns.
    • Risk modeling: machine learning models predict probability of default with far greater accuracy.
    • Decision planning: evaluates whether to approve, reject, adjust loan amount, or change interest rate.
    • Execution: loan approvals and pricing are issued automatically.

    Result: Lending decisions in minutes instead of days, with higher financial inclusion.
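    The decision-planning step can be sketched as a mapping from an estimated probability of default (PD) to an approve, adjust, or reject outcome. The PD bands and pricing rule below are illustrative assumptions, not Upstart's actual model:

```python
def underwrite(pd_estimate, requested_amount, base_rate=0.12):
    """Map an estimated probability of default to a lending decision.

    Bands, haircut, and pricing are illustrative assumptions.
    """
    if pd_estimate < 0.02:
        # Low risk: full amount at the base rate.
        return {"decision": "approve", "amount": requested_amount, "rate": base_rate}
    if pd_estimate < 0.08:
        # Moderate risk: smaller amount at a risk-adjusted rate.
        return {"decision": "approve", "amount": requested_amount * 0.6,
                "rate": base_rate + pd_estimate}
    # High risk: decline.
    return {"decision": "reject", "amount": 0, "rate": None}

print(underwrite(0.05, 100_000))
```

    The agentic part is that the PD estimate is produced upstream by the risk model over the aggregated data, and the resulting decision is executed automatically rather than queued for an officer.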


    4. Autonomous Treasury Management Agent

    Example firms: JPMorgan Chase, HSBC

    A multinational company operates in 25 countries, facing the daily challenge of managing liquidity across currencies and time zones. The Treasury Agent runs a continuous optimization process:

    • Monitors global cash positions across all bank accounts and subsidiaries.
    • Predicts cash flows from incoming payments, invoices, and payroll.
    • Moves excess cash to high-yield accounts, executes FX hedging trades, schedules short-term investments.
    • Automatically triggers payments and treasury trades.

    Result: Treasury operations become real-time and continuously optimized.
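    The cash-movement step can be sketched as a sweep rule: anything above a working buffer in each subsidiary account moves to a central high-yield account. Balances (in millions) and the buffer are invented for illustration:

```python
def sweep(accounts, buffer=1.0):
    """Sweep excess cash above a per-account working buffer (in $M)
    into a central high-yield account. Figures are illustrative.
    """
    moves = {}
    central = 0.0
    for name, balance in accounts.items():
        excess = balance - buffer
        if excess > 0:
            moves[name] = round(excess, 2)
            central += excess
    return moves, round(central, 2)

accounts = {"DE": 3.2, "SG": 0.8, "IN": 5.5, "US": 1.0}
print(sweep(accounts))  # -> ({'DE': 2.2, 'IN': 4.5}, 6.7)
```

    A real treasury agent layers forecasting and FX hedging on top of this, but the nightly sweep is the canonical automated action.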


    5. Autonomous Compliance Monitoring Agent

    Example: FINRA compliance systems used by banks

    Investment banks must monitor thousands of trader communications daily for potential regulatory violations. This workload vastly exceeds human capacity for real-time review. The Compliance Agent operates across four automated stages:

    1. Monitors emails, chats, and trade records continuously.
    2. Detects suspicious language or insider trading signals using NLP.
    3. Correlates communications with trading activity to identify suspicious timing.
    4. Automatically escalates cases to compliance officers when thresholds are crossed.
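    Stage 3, correlating communications with trading activity, can be sketched as a join on trader, symbol, and time window. The keyword list stands in for the NLP classifiers mentioned above; the terms and the window are assumptions:

```python
from datetime import datetime, timedelta

# Illustrative stand-in for an NLP classifier's "suspicious" label.
SUSPICIOUS_TERMS = {"inside info", "before the announcement", "keep this quiet"}

def escalate_cases(messages, trades, window_minutes=60):
    """Flag cases where a suspicious message precedes a trade by the
    same trader in the same symbol within the window (assumed 60 min).
    """
    cases = []
    for msg in messages:
        if not any(term in msg["text"].lower() for term in SUSPICIOUS_TERMS):
            continue
        for trade in trades:
            gap = trade["time"] - msg["time"]
            if (trade["trader"] == msg["trader"]
                    and trade["symbol"] == msg["symbol"]
                    and timedelta(0) <= gap <= timedelta(minutes=window_minutes)):
                cases.append({"trader": msg["trader"], "symbol": msg["symbol"]})
    return cases

t0 = datetime(2026, 3, 1, 10, 0)
messages = [{"trader": "T1", "symbol": "ACME", "time": t0,
             "text": "Keep this quiet until Friday"}]
trades = [{"trader": "T1", "symbol": "ACME",
           "time": t0 + timedelta(minutes=30), "notional": 2_000_000}]
print(escalate_cases(messages, trades))
```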

    Result: Regulatory risk reduced and audits become significantly more tractable.


    Emerging Agentic AI Applications in FinTech

    Area            | Example Application
    ----------------|------------------------------------
    Digital Banking | Autonomous customer service agents
    Lending         | AI underwriting and loan approval
    Trading         | Autonomous trading strategy agents
    Insurance       | Claims verification agents
    Payments        | Fraud prevention agents

    Part III: Real-World Case Studies

    1. JPMorgan’s COiN — Saving 360,000 Hours

    System: COiN (Contract Intelligence)

    Every year, JPMorgan reviews thousands of commercial loan agreements. Traditionally, lawyers manually extracted details — collateral requirements, payment schedules, risk clauses — from lengthy contracts. This took hundreds of thousands of hours annually.

    The COiN system autonomously reads legal documents, scans loan contracts, extracts key clauses, flags unusual risk conditions, and routes risky contracts to legal teams.

    Result: The equivalent of 360,000 hours of legal analysis automated. Contract reviews that took weeks now take seconds. The system monitors, analyzes, and triggers actions without human review for the vast majority of cases.


    2. PayPal’s AI Fraud Defense

    During a global shopping festival, millions of transactions were processed within minutes. Fraudsters attempted to exploit the surge through fake accounts, rapid small transactions, and stolen cards. PayPal’s AI risk system autonomously detected abnormal transaction velocity, compared patterns with historical fraud, blocked suspicious accounts, and requested identity verification.

    Result: Fraud attempts stopped in real time. Billions of transactions protected. The system operates as an autonomous financial security agent.


    3. Ant Group’s Instant Loan Decisions

    Platform: Alipay credit services

    Small businesses in China historically struggled to secure bank loans because they lacked traditional credit histories. A restaurant owner applying for a small business loan illustrates the model: the lending AI automatically analyzed digital payment history, customer traffic, supplier payments, and tax data. Within minutes it calculated credit risk, approved the loan, and transferred funds.

    Result: Loans that previously took weeks were issued in 3 minutes. A classic agentic financial decision system — it evaluates, decides, and executes.


    4. BlackRock’s Aladdin Platform

    BlackRock manages trillions of dollars in assets for pension funds globally. When markets became volatile during a financial shock, the Aladdin platform automatically monitored portfolio exposures, simulated thousands of risk scenarios, identified high-risk portfolios, and suggested hedging strategies. Fund managers received real-time alerts and recommendations.

    Result: Investors could react to market shocks within minutes instead of days.


    5. Stripe Radar — AI Payment Risk

    An e-commerce company using Stripe experienced a surge of international orders, many of which were fraudulent. Stripe’s AI agent analyzed billions of payment patterns, detected suspicious card behavior, automatically blocked high-risk payments, and allowed legitimate customers through seamlessly.

    Result: Fraud losses significantly reduced while maintaining smooth checkout experiences.


    Why Finance Is Leading the Agentic AI Transition

    Financial systems are uniquely suited to autonomous agents because they combine four characteristics that make agentic operation both possible and necessary:

    • Real-time decisions — financial markets and transactions do not pause for human review.
    • Large data streams — continuous, structured data that agents can observe and process at scale.
    • Risk management — the cost of inaction or error is high, creating strong incentives for automated vigilance.
    • Automated execution — financial systems already have APIs and infrastructure capable of executing decisions programmatically.

    This is why virtually every major financial institution — JPMorgan, BlackRock, PayPal, Ant Group, Stripe — has deployed systems with agentic characteristics. The pattern is unmistakable: observe, reason, decide, act, learn.


    The Ten-Year Horizon

    Looking ahead, the domains most likely to see dominant agentic AI deployment over the next decade include supply chains, procurement, enterprise sales, IT operations, and financial planning. But the deeper shift is the convergence of Agentic AI + Generative AI + Operations Research — agentic systems that will not only execute decisions but generate novel strategies, simulate organizational futures, and optimize complex multi-objective tradeoffs at a scale no human team could sustain.

    The question for every organization is no longer whether to adopt Agentic AI, but how quickly and thoughtfully to integrate it.


    Conclusion

    Agentic AI represents a genuinely new category of technology — not because the underlying methods are entirely novel, but because the integration of reasoning, planning, and autonomous execution creates systems that behave qualitatively differently from anything before.

    From a supply chain agent rerouting components around a Taiwan supplier delay, to JPMorgan’s COiN automating 360,000 hours of legal work, to Ant Group issuing loans in three minutes — these systems share a common architecture and a common outcome: complex, consequential decisions made and executed faster, more consistently, and at greater scale than human operations allow.

    The organizations building these systems are not simply automating existing workflows. They are redesigning their operating models around the capabilities of autonomous agents. As agentic systems become more capable and widespread, the gap between organizations that have integrated them and those that have not will widen rapidly.

    Understanding this shift — its architecture, its applications, and its strategic implications — is one of the defining intellectual challenges for management researchers, practitioners, and policymakers in the years ahead.