Blog of Manas J. Saloi

A product leader who has held key product management roles at Gojek, Directi, Craftsvilla, CouponDunia, and Kore, responsible for product development and growth.

There is always a queue

2025-03-12 08:00:00

There is always a queue.

Every marketplace claims to be homogeneous, insisting it does not differentiate within its demand and supply pools, but this is not true.

Marketplaces essentially function as a queue management system, balancing demand (users who want something) and supply (those who provide it).

The goal is to organize demand and supply into clear queues. Once structured, the marketplace can dynamically adjust priority based on users’ willingness to pay and suppliers’ willingness to be utilized.

When many users request rides at the same time, their requests form a queue. Typically, matching a user with a driver takes 10 to 30 seconds.

Jumping the Queue: Users can move ahead in the queue by showing a willingness to pay more, which can be done through:

  • Adding a tip
  • Choosing premium or priority services
  • Increasing their bid or fare

Waiting to Save: Some users prefer to save money by waiting longer or booking at off-peak times. By accepting a lower priority, they pay less. Suppliers can choose to accept these users when demand is low or ignore their bids; it is up to them to decide how to maximize earnings per hour. In some cases, when supply competition is high and demand is low, users can even negotiate the price down instead of up.

Some marketplaces invert the model: suppliers bid and users select, rather than the other way around. Some ride-hailing companies even let drivers purchase a small booster that increases their order income by X% for the next Y hours. This is essentially paying to opt into a higher surge or bonus bracket; while it might seem counterintuitive (drivers paying the platform), some drivers use it strategically when they know demand will be high, to maximize their earnings.
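The jump-the-queue versus wait-to-save trade-off can be sketched as a priority score. Everything below is an illustrative assumption, not any real marketplace's dispatch logic: the effective priority simply blends the rider's bid (fare plus tip) with how long they have waited, weighted by a made-up `WAIT_WEIGHT` constant.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: effective priority blends the rider's bid
# (fare + tip) with time already spent waiting. The weight and the
# scoring formula are illustrative, not a real dispatch algorithm.

WAIT_WEIGHT = 0.5  # priority points earned per minute of waiting


@dataclass
class Request:
    rider_id: str
    fare: float
    tip: float = 0.0
    created_at: float = field(default_factory=time.monotonic)

    def priority(self, now):
        waited_min = (now - self.created_at) / 60
        return self.fare + self.tip + WAIT_WEIGHT * waited_min


def next_to_match(requests, now):
    """Pick the request a freed-up driver should serve next."""
    return max(requests, key=lambda r: r.priority(now))


t0 = time.monotonic()
patient = Request("a", fare=10.0, created_at=t0 - 600)    # waited 10 min
tipper = Request("b", fare=10.0, tip=3.0, created_at=t0)  # just arrived

# Waiting has earned the patient rider 5 points; a 3-point tip is not enough.
assert next_to_match([patient, tipper], t0).rider_id == "a"

# A bigger tip jumps the queue past the accumulated wait.
tipper.tip = 6.0
assert next_to_match([patient, tipper], t0).rider_id == "b"
```

The same scoring view works from the supply side: a driver's willingness to be utilized is just a reservation price below which they ignore the queue.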

This principle extends beyond ride-hailing.

Home Services (Urban Company): Users who pay a premium receive faster service allocation. Others can opt for standard pricing or book during low-demand hours for lower rates.

Restaurant Reservations: At peak times, users may pay extra (a cover charge or premium) to secure prime time slots. Conversely, restaurants offer discounts during off-peak hours to attract customers when demand is low.

Summary: Marketplace efficiency relies on a dynamic queue system—where a user’s position is determined by how much they’re willing to pay and how long they’re willing to wait. The same applies to suppliers. This is a way to maximize revenue and optimize utilization.

What Makes an AI Company?

2025-03-12 08:00:00

What makes an AI Company?

A simple test: remove the core AI component - does the company still function well, attract users, and maintain similar revenue growth? If the answer is no, then AI isn’t just an add-on.

It needs to be the foundation of the business.

Take Cursor, for example. Its standout feature, Composer, powers the agentic coding experience that vibe coders rely on. Even without Composer, developers might still use Cursor for AI-driven chat with their codebase or enhanced autocomplete. But strip away all AI features, and Cursor is just VS Code; its core differentiation disappears.

Now, consider ride-hailing. Machine learning has always been a part of it - customer segmentation, supply positioning, pricing models. But if you removed ML (distinct from AI here, where AI refers to LLMs), the system would still function. Matching could default to shortest-distance allocation, and pricing could rely on deterministic logic based on past data. AI enhances efficiency, but the product doesn’t depend on it.

Look at Apple Notes, Google Suite, and Meta products. They’ve integrated AI, but has it driven explosive user growth or revenue? Not really. Most people would still use them without AI, and few would pay extra for the current AI-powered features.

For companies like Cursor, AI isn’t just a feature - it’s the engine behind their insane ARR growth. It attracts users who might have never used VS Code but can now build prototypes effortlessly on Cursor.

That’s the real distinction between an AI company and a company that merely uses AI.

A cold email I liked

2025-03-11 08:00:00


Related links: How to do a cold outreach right

Google’s agentic future

2025-03-10 08:00:00

I’ve been thinking a lot about agentic software lately and how it’s going to transform our workplaces. While everyone’s experimenting with AI assistants and cobbling together solutions with tools like LangChain and Crew AI, I believe Google actually has the best shot at making AI agents a seamless part of our daily work lives. Let me explain why.

Think about Google’s product ecosystem for a second. You’ve got Gmail, Meet, Drive, Docs, Sheets, Slides - all these powerful tools operating on the same layer. But what’s missing is a meta layer above all of these - an orchestration layer that ties everything together. Essentially, an operating system for work.

Picture a knowledge worker (like a product manager) operating primarily from this layer rather than jumping between individual apps. This orchestration view would give you:

  • A unified task list
  • Recently updated documents
  • Contextual recommendations
  • A single place to instruct your AI agents

Without this layer, your work remains fragmented across multiple apps, and you’re constantly moving context around manually. With it, you get a cohesive workspace where information flows naturally.

Ultimately, this will be the unified view for the product manager. Not Slack. Not Drive. Not Jira.

How Would This Actually Work?

Let’s walk through a couple of concrete examples:

Example 1: Propagating strategic updates

Your CEO holds an all-hands meeting where she announces a major change: “We’re shifting our revenue growth target from 10% to 20% for the next fiscal year.”

Current workflow: You attend the meeting, take notes, and then manually update numerous documents - your roadmap spreadsheet, business models, planning docs, and presentation decks. You hunt down each relevant file, make changes one by one, and hope you haven’t missed anything.

With an agentic layer: You simply tag the key insight from the Meet transcript: “Growth target increased to 20% from 10%.” Your Google agent then:

  1. Identifies all relevant documents using Google’s powerful search capabilities
  2. Drafts updates to your roadmap in Sheets
  3. Modifies growth projections in your business models
  4. Updates planning documents in Docs
  5. Presents all these changes for your review in a unified interface (this could live inside the Meet client, the way you accept changes inside a file in Cursor; or, since the updates take time, you could leave Meet and accept the changes later in the OS-level view, the equivalent of Cursor’s Composer view)

You review the suggested updates, approve them (perhaps with some tweaks), and the changes propagate across your workspace. What might have taken hours now takes minutes.
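The shape of that flow can be sketched in a few lines. Everything here is hypothetical: the workspace dictionaries and helper functions stand in for whatever APIs Google would actually expose. The point is the pattern: search finds the affected documents, the agent drafts edits, and nothing changes until a human approves.

```python
from dataclasses import dataclass

# Hypothetical sketch of the propagate-an-update flow. The workspace
# structure and helpers are invented for illustration; only the pattern
# (search -> draft -> human review -> apply) matters.


@dataclass
class Draft:
    doc_id: str
    old_text: str
    new_text: str
    approved: bool = False


def find_affected_docs(workspace, query):
    """Step 1: use search to locate every doc mentioning the old target."""
    return [d for d in workspace if query in d["body"]]


def draft_updates(docs, old, new):
    """Steps 2-4: draft (but do not apply) the edits for each doc."""
    return [Draft(d["id"], old, new) for d in docs]


def apply_approved(workspace, drafts):
    """Step 5: only drafts the human approved actually change anything."""
    by_id = {d["id"]: d for d in workspace}
    for draft in drafts:
        if draft.approved:
            doc = by_id[draft.doc_id]
            doc["body"] = doc["body"].replace(draft.old_text, draft.new_text)


workspace = [
    {"id": "roadmap", "body": "FY26 target: 10% revenue growth"},
    {"id": "okrs", "body": "Grow revenue 10% while holding margin"},
    {"id": "brand", "body": "Logo usage guidelines"},  # unaffected
]

affected = find_affected_docs(workspace, "10%")
drafts = draft_updates(affected, "10%", "20%")
drafts[0].approved = True  # the human reviews and approves selectively
apply_approved(workspace, drafts)
```

The human-in-the-loop gate is the important design choice: drafts accumulate cheaply, but writes happen only on approval, which is exactly what makes the "accept changes" review surface viable.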

Example 2: Creating Product Specs on the Fly

Now let’s look at how this might work within Google Docs specifically. Imagine you’re a product manager tasked with creating a spec for a new feature that will help hit that ambitious 20% growth target. You open a blank Google Doc to start drafting.

Current workflow: You manually gather context from multiple sources - you open your business projections spreadsheet in another tab, pull up design mockups, reference competitor analysis docs, check the technical limitations from engineering notes, and try to keep all this context in your head while writing a coherent spec from scratch.

With the agentic assistant: Just like how Cursor has Composer chat always available on the right-hand side for coding, Google Docs would have an always-present AI sidebar. There you can ask the agent to co-work with you on the spec. You start by telling it:

“I need to create a spec for our new Premium tier feature. This needs to contribute to our new 20% growth target.”

The assistant responds: “I can help with that. What information should we incorporate?”

You reply: “Let’s use the Q3 business projections spreadsheet, the competitive analysis from last month, and those new mockups the design team shared yesterday.” You can even tag the relevant files like you do on Cursor.

The assistant then:

  1. Instantly pulls in the relevant data from your linked spreadsheet
  2. Analyzes the competitive landscape from your previous doc
  3. References the design mockups and their annotations

It then drafts a complete product spec including:

  • Feature overview
  • Success metrics (directly tied to the 20% growth target)
  • Technical requirements
  • Timeline recommendations
  • Revenue impact projections (pulled right from your spreadsheet)

You review it section by section, making tweaks and giving feedback like: “The timeline is too aggressive here,” or “Add more detail about the user flow.” The assistant refines its work based on your guidance.

When you’re satisfied, you accept the changes - just like in Cursor Composer - and your polished spec is ready to share with stakeholders, complete with all the right references and metrics aligned to your company’s new growth targets.
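Mechanically, the tagged files become the context block of a drafting prompt. The sketch below is a loose illustration; the file names, template, and section list are assumptions, not any product's actual prompt format.

```python
# Hypothetical sketch: turning the files you tag (as with Cursor's
# @-mentions) into the context block of a spec-drafting prompt.
# The template and the section list are illustrative assumptions.

SPEC_SECTIONS = [
    "Feature overview",
    "Success metrics",
    "Technical requirements",
    "Timeline recommendations",
    "Revenue impact projections",
]


def build_prompt(goal, tagged_files):
    # Each tagged file becomes a delimited context section.
    context = "\n\n".join(
        f"--- {name} ---\n{contents}" for name, contents in tagged_files.items()
    )
    sections = "\n".join(f"- {s}" for s in SPEC_SECTIONS)
    return (
        f"Goal: {goal}\n\n"
        f"Context documents:\n{context}\n\n"
        f"Draft a product spec with these sections:\n{sections}"
    )


prompt = build_prompt(
    "Premium tier feature contributing to the 20% growth target",
    {
        "Q3 projections": "Premium ARPU uplift est. 12% ...",
        "Competitive analysis": "Rival X launched a premium tier ...",
    },
)
```

Google's advantage in this step would be that the `tagged_files` contents never need to be copied around by the user - the agent already has read access to Drive.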

Google is uniquely positioned to build this future for several key reasons:

  1. Complete product ecosystem - They already own the full suite of workplace tools where most knowledge work happens
  2. Best-in-class search - Google’s core competency is finding relevant information, which is essential for agents to locate the right documents to modify
  3. Gemini’s massive context window - Unlike competitors, Google’s Gemini can hold multiple large documents in context simultaneously, allowing it to make coherent, cross-document updates. No other model comes close
  4. Native integration potential - No “duct tape” solutions needed - Google can build agents directly into their existing products. Their RAG flow will always be better because they have access to all your relevant documents
  5. Treasure trove of user work data - Google already has access to how you work, collaborate, and use their tools. This data is gold for building agents that adapt to your specific workflow patterns. Sure, you could duct tape together various agentic solutions, but they won’t have this rich history of your work habits to draw from.

This gets even more powerful when you add MCP (Model Context Protocol) to the mix. This approach would allow the Google orchestration layer to extend beyond just Google products.

Your agent could use MCP servers to communicate with tools like:

  • Jira for project management
  • Asana for task tracking
  • Dozens of other workplace tools
  • Slack for status updates

All from that same unified interface. Imagine saying “Update our roadmap to reflect the 20% growth target and make the changes on Asana” and having it all happen automatically.
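That fan-out is, at heart, a routing problem: one instruction, several tool integrations. The sketch below is deliberately naive and hypothetical - the handler functions stand in for MCP servers (real MCP clients speak a JSON-RPC protocol to separate server processes rather than calling Python functions), and the keyword routing stands in for the model's tool-selection step.

```python
# Hypothetical sketch of the fan-out: one instruction dispatched to
# several tool integrations. Handlers stand in for MCP servers; the
# keyword match stands in for the model's actual tool selection.

actions_log = []


def update_sheets_roadmap(payload):
    actions_log.append(("sheets", payload["target"]))


def update_asana_tasks(payload):
    actions_log.append(("asana", payload["target"]))


def post_slack_status(payload):
    actions_log.append(("slack", payload["target"]))


TOOL_HANDLERS = {
    "roadmap": update_sheets_roadmap,
    "asana": update_asana_tasks,
    "slack": post_slack_status,
}


def dispatch(instruction):
    """Naive router: invoke every tool the instruction mentions."""
    payload = {"target": "20% growth target"}
    for keyword, handler in TOOL_HANDLERS.items():
        if keyword in instruction.lower():
            handler(payload)


dispatch("Update our roadmap to reflect the 20% growth target "
         "and make the changes on Asana")
```

The instruction mentions the roadmap and Asana but not Slack, so only those two handlers fire - which is the behavior you would want from the unified interface.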

And for large enterprises with thousands of files? Only Google’s approach would scale. While a huge context window helps, Google’s real superpower is its ability to index, search, and prioritize information at massive scale: it can fetch exactly the files relevant to a particular task. It doesn’t need new permissions, because your data is already with Google. And its model for predicting which files a task needs only gets better over time, because millions of workplaces already use Google.

Furthermore, even if your organization has files spread across hundreds of teams and departments, Google’s infrastructure is already designed to handle exactly this kind of complexity. The search giant would excel at finding the needle in your organizational haystack and identifying the information that needs updating when new information comes in.

For knowledge workers, this represents a fundamental shift in how we spend our time:

  • Less manual information movement between tools
  • Reduced busywork updating documents
  • More time for creative thinking and strategic decision-making
  • Fewer things falling through the cracks

In essence, the agent becomes your second brain - remembering context, suggesting actions, and handling routine tasks while you focus on the work that truly requires human judgment.

I believe we’re moving toward a world where our digital work life isn’t spread across dozens of disconnected apps. Instead, we’ll operate from an orchestration layer that gives us a cohesive view of our work while agents handle much of the manual labor behind the scenes.

Google, with its integrated workspace, massive context windows, and search capabilities, has everything needed to make this vision a reality. The question isn’t if this will happen, but when - and which companies will be quick enough to adapt.

As these agentic systems mature, the productivity gains will be enormous. The companies that embrace this shift earliest will have a significant competitive advantage. And based on what I’m seeing, Google is perfectly positioned to lead the way.

P.S. Yes, Microsoft can build this too. Eventually OpenAI might also figure out long context, and Microsoft has a product suite comparable to Google’s. But since I have always used the Google suite, this post was more about how I see my own workflow evolving.

Best time to be a growth PM

2025-03-09 08:00:00

There’s never been a better time to be a growth PM for a big company. Every day, I come up with dozens of top-of-the-funnel mini-product ideas. Great for user acquisition, but not quite full-fledged startup material.

A decade ago, I used to get inbound offers to build these for existing companies. Think: a credit score checker as a lead gen tool for a lending app. Back then, the cost of building such products was still high. I was not sure if leadership would be invested in the long term. How many bets would they take? What if they shut down the team after one failed launch? Now with Gen AI, the cost is almost zero.

Actually, it is not just top of the funnel. You can do products for engagement too. Spotify wrapped for other products. Mini-gamified surveys. Endless possibilities.

Low trust society

2025-03-08 08:00:00

India is a low trust society.

But I have seen some progress over time.

Earlier, after each Unified Payments Interface (UPI) payment, merchants would check your phone to verify it. Nowadays, I rarely see auto-rickshaw drivers or merchants wait to verify; they move on to the next customer. Yes, UPI payment confirmation sounds are common, but auto-rickshaws and similar settings don’t have them. The drivers simply assume you will not cheat them.

But for every positive step, we have issues like the food adulteration scandals that go viral and make people doubt whether they can trust by default.

Websites have trust markers. They display logos of partners. They show logos of their venture capitalists (VCs).

But I was thinking about what the equivalent trust proxies are for offline merchants.

Most local eateries have open kitchens. You know your food is prepared fresh. You can see it made in sanitary conditions. You watch it being prepared in front of you. The fact that the restaurant has spoons in hot water makes me trust them more. If they care about the spoons, they probably care about the ingredients too.