
[PM Talks] AI in the Product Manager's Toolbox with Ivana Ciric (Thoughtworks)

AI is introducing rapid changes to nearly every industry. We're seeing shifts not only in what users want and what companies build for them, but also in how we think and work. How is AI changing the way product managers do their jobs? What tools should be in their toolbox to stay competitive in the job market?

We explored these questions with Ivana Ciric, Principal Product Manager at Thoughtworks. Ivana brings her experience working with Fortune 100 companies, and shares her thoughts on how PMs should equip themselves for the age of AI.

As she says, AI is the future — and now is the time to invest and experiment. So grab a coffee and enjoy today’s episode.

About Ivana Ciric:

  • Principal Product Manager at Thoughtworks
  • Experience working with Fortune 100 companies
  • Follow Ivana on LinkedIn

How has product development evolved with the rise of generative AI and AI agents? What changes have you observed in recent years—and especially in the last few months?

I mean, it's interesting—and this will probably come up throughout the conversation—but there’s the rise of GenAI, and then there’s everything else happening in the world right now: post-COVID, the boom, the subsequent crash, the state of the economy. And sometimes, we can't really disentangle those broader shifts from what's happening in product development.

Broadly speaking—not just in terms of product managers, but product development as a whole—we’re still working across what we often refer to as two areas. Some people call them discovery and delivery. We tend to think of them as: first, figuring out what to build—so building the right thing—and then figuring out how to build it right.

That includes questions like: how do we get the right team in place? How do we plan for it? How do we execute as quickly as possible to get the best product out into the market?

Now, I know we're focusing on tooling. So how has generative AI—and AI agents—helped with product development?

We're seeing this across almost every aspect of the process. We now have tools that help us both build the right thing and build the thing right. For example, in ideation, we have all kinds of LLMs we can use from the earliest stages. For research, we have amazing copilots and coding tools that assist with writing actual code—on the engineering side, there’s been massive progress.

We’ve also seen advances in product support and customer success. These AI tools now support nearly every stage: ideation, strategy, prototyping, coding, testing, releasing, and even post-launch support and product evolution.

Take Shopify, for example—they recently launched a tool that fundamentally changes how they support customers. It allows them to scale in ways they couldn’t before. So there are huge opportunities here to improve productivity and scalability. And we’ll come back to that, because there’s more to gain than just productivity.

What we’re seeing now is just the first stage: using AI tools to enhance our current methods. But the next stage is about shifting how we work altogether.

Do we even need the same processes we’ve relied on? For instance, people are using LLMs to write PRDs—there are tools specifically designed for that. But do we even need PRDs in this new context?

We have to start asking: are these traditional techniques and methods still valuable in this new way of building products?

So, just a few things I’m observing right now in the market. I’m sure we’ll dive deeper into them later.

Before we dive into specific tools and use cases, let’s take a step back. What hasn't changed in product development since the introduction of AI tools? What are the things you believe will remain the same—where AI simply can’t replace or fill the gap?

Never say never. I've seen AI tools improve dramatically—even just in the past few months. We're constantly seeing new evolutions.

Take prompting, for example. At first, it was just a blank text field—you had to figure out what to type in. But now, many tools suggest prompts or offer entire prompting libraries. So, if you thought prompting would be a long-term competitive advantage, that’s already shifting. There are now plenty of public resources available to help you get better at it.
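At its simplest, a prompting library like the ones Ivana describes is just a set of named, reusable templates. A minimal sketch in Python (the template names and wording here are illustrative, not taken from any particular tool):

```python
# A tiny prompt library: named templates with placeholders,
# plus a helper to fill them in. Names and wording are illustrative.
PROMPT_LIBRARY = {
    "user_interview_summary": (
        "Summarize the following user interview notes into the top "
        "three pain points, each with a supporting quote:\n\n{notes}"
    ),
    "prd_draft": (
        "Draft a one-page PRD for the feature described below. "
        "Include a problem statement, goals, and open questions:\n\n{idea}"
    ),
}

def fill_prompt(name: str, **kwargs) -> str:
    """Look up a template by name and substitute its placeholders."""
    return PROMPT_LIBRARY[name].format(**kwargs)
```

The point is not the code itself but the shift it represents: once good prompts are captured and shared like this, prompting stops being an individual advantage and becomes team infrastructure.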

So I hesitate to say never. However, there are still some things that haven’t changed.

At the core of product development, we still need to build something that’s valuable to users and that supports the business. And depending on the stage we’re in, we may focus more on one than the other. That fundamental flow—what to build, and then how to build it—still holds true today.

One skill I’ve been thinking about a lot lately is the ability to clearly describe what you want. Whether you’re a one-person startup or an engineer doing what’s been recently called “vibe coding,” it's crucial—especially when using generative AI or agents—to articulate exactly what you’re aiming for.

There are just so many possibilities. Even in traditional product development, misunderstandings were common—during handoffs or team collaboration. That hasn’t changed. You still need to express clearly what you want to build: the vision, the strategy, and how you expect your users and customers to benefit from it.

That skill remains essential. Maybe the mechanisms change—we may no longer write user stories or PRDs in the same way—but even when working with AI tools, you still need to describe things well and iterate from there.

Another thing I’m seeing with clients: despite how long I’ve worked in this space, many still use traditional waterfall approaches—even in modern tech companies. Others use agile, SAFe, or dual-track models. There’s so much variety in how teams approach product.

And I don’t see that going away. You have to use what aligns with your company culture, your team’s skill sets, your industry, and the nature of your product. We’re still going to see a wide range of approaches to building products. It’s still incredibly important to adapt your product practices and tooling to fit your specific context—and that hasn’t changed.

In these turbulent times, what would you recommend fellow product managers focus on? What skills should they be learning or strengthening right now?

Well, I mentioned one already. Another that comes to mind is how much focus we put on the end tools that consume all our data and information, while overlooking how important it is to have clarity around our methods. The approaches we take and our internal knowledge—whether it’s about our company or our methodology—really matter.

I’ll give you an example that’s close to me. I’ve worked on developing our internal Product Thinking Playbook at Thoughtworks. It’s not just a deck of cards—it’s a repository of all the methods we use, with examples, best practices, and things to consider.

Codifying that kind of knowledge is like codifying culture. It defines what sets us apart as an organization. Having something like that, clearly articulated, ties back to the importance of communication—which is such a key factor when working with AI tools.

The same goes for data. We work with many organizations, including those in life sciences and oil and gas, that are still figuring out how to gather and make sense of all the structured and unstructured data they’ve collected over decades. Often, people jump straight into AI without first gaining clarity on their data and internal processes—but there’s so much value in getting that foundation right.

Another important point is simply experimenting with these tools. I’ve done a lot of that myself, and I’ve gotten stuck and frustrated at times. But, as I mentioned, the tools are evolving so rapidly that the more we experiment now, the better prepared we’ll be to use them effectively as they mature. That doesn’t mean it has to become a full-time job—but just experimenting is one of the best things you can do at the moment.

So, to summarize: communication is key, as is creating repeatable artifacts—methods and tools that AI can leverage. Then comes hands-on experimentation—figuring out what the tools are good at, where they fall short, and where you might need to develop new skills, whether it’s prompting, evaluations, or other emerging practices. That’s important whether you're building AI products or simply using AI tools.

And I’ll add one more thing. If you look at the industry and what employers are prioritizing, there’s a strong emphasis on specialized skills. Whether you plan to stay in your current industry or move around, it's essential—for both your professional development and employability—to go deep. Learn specific skills, develop domain expertise. AI isn’t just a new way to add features or build products—it’s becoming deeply embedded in how value is created. And to build real value for your customers and business, you need to understand your industry in depth and know how to apply AI meaningfully within that context.

Do you provide the AI tools with access to that playbook so they can operate based on its content?

Absolutely. We’ve actually built an AI tool internally that incorporates our playbook. It helps us access and apply the playbook more easily—but we also continue to use the playbook manually.

On the backend, it’s an Airtable repository, and it’s very well organized. We’ve included fields like whether a practice is remote-first, which is important given how many companies are now developing products in distributed environments.

There’s also a description of each method, details on who the subject matter experts are—so you know who to go to—and examples of clients and products we’ve worked on that demonstrate the technique in action.

So yes, we’ve fed all of that into our AI tool, but we still actively use the traditional version as well when we’re working on product development.
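As a rough illustration of the kind of record Ivana describes (the field names below are assumptions for the sketch, not Thoughtworks' actual Airtable schema), a playbook entry might be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class PlaybookMethod:
    """One entry in a product-method repository (field names are illustrative)."""
    name: str
    description: str
    remote_first: bool                          # suitable for distributed teams?
    subject_matter_experts: list = field(default_factory=list)
    example_clients: list = field(default_factory=list)

def remote_friendly(methods):
    """Filter for practices that work in distributed environments."""
    return [m for m in methods if m.remote_first]
```

Structuring the knowledge this way is what makes it useful both to people browsing it manually and to an AI tool that needs to retrieve and apply it.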

How do you onboard new product managers who aren't familiar with these AI tools? Is it all in the playbook, or do you provide specific training?

Yeah, so the playbook itself—while it does mention tools that are appropriate for specific techniques—doesn’t necessarily endorse any of them. It’s designed to be a flexible framework that helps you think about your thinking, in a way.

Take user story mapping, for example. There are many tools you can use for it, and with AI, this has evolved too. You can now sit with an LLM and go through the user story mapping process together. So our focus is less on the tools themselves and more on the knowledge and the practice behind them.

When someone new joins our team, we have multiple internal training courses, including one specifically for the playbook. It covers several of the methods, but more importantly, it teaches how to approach this often nebulous space of product development.

Every company you work with will have its own version—some still use SAFe, some use waterfall. So the key is knowing how to integrate best practices in a way that’s adaptable, without being overly dependent on any single tool.

Within the generative AI space specifically, we also have internal self-paced training courses that our Thoughtworkers have built. Anyone can go into our learning and development portal, access the content, and experiment with the tools.

One of my favorite aspects of being at Thoughtworks is how much people share what they’re working on. We have weekly demos where folks present new AI tools they’ve developed, how clients are using them, or simply something new they’ve learned.

So we’ve developed a strong curriculum around GenAI, but we also have a culture that encourages low-stakes experimentation. People are free to share and explore without any performance pressure. It’s more like: “Hey, I did this cool thing—let me show you.”

Fostering a culture that supports experimentation is really important. But beyond that, you also need to invest in providing the tools, offering training, and making it exciting—showing what’s possible, and inspiring curiosity about the art of the possible.

If you were the first product manager joining a small startup today, which AI tools would you bring in right away—and why those specifically?

That’s a great question—because honestly, it really depends on the stage of the company and the expertise available.

Let’s say there’s no generative AI tooling in place yet. I’ll focus on product management, but I’ll also assume that some form of coding assistance is already being used—because that’s an area where we’ve seen tremendous improvement and benefit. Many engineers love coding assistants, and there are so many resources and examples out there to support their adoption.

But whenever you bring in new tools, it’s also crucial to invest in your practices and craft. From the product management side, especially in the early stages, topics like privacy, security, and intellectual property often enter the conversation. Maybe these are more relevant further down the road—but they’re still important considerations.

So, if I were the first product person on the team, I’d start by using LLMs for ideation—generating divergent ideas and helping structure my thinking and planning.

Then I’d look at where the team needs support. For instance, maybe we have plenty of engineers who are great at what they do, but they don’t have time to document their work. In that case, we can use AI to generate documentation or explain the code, helping product managers understand what’s happening without taking more time from engineering.

If the team is strong in one area but struggling in another, that’s where AI tools can provide real leverage. So rather than jumping to advanced or niche tools too early, I’d first assess where the gaps are—what’s going well, and what’s not—and choose tools accordingly.

That said, I’ve worked across many industries, and there really isn’t a one-size-fits-all answer. But I’m especially excited about using generative AI for prototyping.

I see a lot of startups doing this now—getting to a version zero quickly. There’s even a tool called v0 that lets you generate high-fidelity prototypes from just a prompt or two. Whether you’re a designer, an engineer, or a PM, you can express your idea at a much higher level of fidelity than was possible before—all on your own.

So those are a few things that come to mind when I think about starting out and selecting the first AI tools to bring into the team.

Do you have a preferred model—like ChatGPT, Claude, or Gemini—or are they largely interchangeable? And are there tools tailored specifically for product managers, or is it more about how you prompt the model and what context you give it?

And again, this brings us back to the idea of "never say never"—especially when it comes to choosing tools. It’s nearly impossible to keep up with all the new releases happening right now. What’s clear, though, is that each of these LLMs and tools continues to evolve rapidly.

As of now, I’m not aware of anything significantly differentiating one LLM over another specifically for product management use cases. So I wouldn’t say, for example, “Definitely use Gemini because it’s far better than the rest.” That’s just not something I’ve seen yet.

This is why it’s so important to experiment within the parameters you have. Depending on your company’s privacy, security, or legal requirements, you might be limited to using a specific tool. And of course, you should always be careful not to input any sensitive information into any of these systems—security still comes first.

But in terms of standout features that clearly make one model better than the others for product management, I haven’t seen a definitive case yet. So really, it comes down to experimenting responsibly with the tools available to you.

What are some common mistakes you’ve seen when adopting AI tools in product development? You mentioned compliance and regulatory concerns—are there other pitfalls product managers should watch out for?

The first thing that comes to mind is how much focus there is right now on productivity. I’ve worked with companies that say things like, “This coding task used to take five minutes, and now it only takes one with GenAI—so we’re seeing huge productivity gains.”

And yes, it’s important to make a business case for any tool you adopt, including AI tools. But we’re in a phase where it’s not so easy to measure impact in strict, minute-by-minute terms. It’s much more nuanced.

What’s more important is acknowledging that we now need to learn a completely different set of skills—and it’s in our best interest to invest in that learning, even if we can’t yet quantify exactly what productivity gains we’ll see.

When productivity is measured, people often cherry-pick the data that supports their point. And while I’ve definitely seen productivity improvements myself—both in my own workflows and among engineering and design practitioners I work with—it’s crucial to think bigger. It’s not just about doing things faster. What have we learned that allows us to work differently? And what can we now do with AI that simply wasn’t possible before?

If I think back to past technology shifts, email is a great analogy. When email first came out, we thought it would make our lives easier. Instant communication! No more miscommunication! More efficiency! But what happened was quite the opposite. Email—and now Slack and other tools—actually take up a huge portion of our days. Being able to communicate instantly has, in some ways, eroded our ability to communicate clearly and thoughtfully.

So, what will happen with AI tools? I’ve spent a lot of time researching digital wellbeing and how technology affects our brains. I think it’s worth asking ourselves: now that I’ve been using LLMs or GenAI tools for a while—whether it’s a few months or a couple of years—how has my work actually changed? Could I go back to how I worked pre-LLMs? What does my thinking process look like now? Has it improved? Has it changed the way I approach my work fundamentally?

It’s not just about how our roles are changing, but how we are changing because of these tools.

Another big one: not checking the output of AI. It’s so easy to get comfortable and trust that the tools are giving you better and better results. And sure, in low-stakes scenarios—like ideation—maybe that’s fine. But when you’re releasing code to production, or influencing not just users but entire communities, you have to check the output. You need guardrails in place. At a very basic level, quality control still matters.
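At a basic level, a guardrail can be as simple as a programmatic check on the model's output before it reaches anyone downstream. A minimal sketch, assuming the LLM was asked to return JSON with specific fields (the schema here is illustrative):

```python
import json

REQUIRED_FIELDS = {"title", "summary", "risk_level"}  # illustrative schema
ALLOWED_RISK_LEVELS = {"low", "medium", "high"}

def validate_llm_output(raw: str) -> dict:
    """Reject malformed or out-of-range model output instead of trusting it."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {exc}")
    if not isinstance(data, dict):
        raise ValueError("Model output is valid JSON but not an object")
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Missing required fields: {sorted(missing)}")
    if data["risk_level"] not in ALLOWED_RISK_LEVELS:
        raise ValueError(f"Unexpected risk_level: {data['risk_level']!r}")
    return data
```

Real guardrails go much further than this, but even a check this small makes the failure loud instead of silently shipping a bad answer.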

And maybe I’ll end on this: one side effect I’m seeing a lot is that people are now able to generate much more—whether that’s prototypes, documentation, or comms. You can spin up 5, 6, even 10 different versions of something instantly. But then someone—you or a colleague—has to review all of it. And oftentimes, the options aren’t meaningfully different. So we’re at risk of creating more, but not necessarily better.

We have to be mindful of that. The goal shouldn’t be volume—it should be clarity, quality, and usefulness.

There’s a lot that goes into that, whether it’s prompting effectively, using the tool with intention, double-checking the output, or ensuring there’s a human in the loop. Taking the time to really understand how to work with these tools is super important.

How do larger organizations—like the Fortune 100 companies you've worked with—approach adopting new AI tools? How flexible are they compared to startups, and what does implementation typically look like in those environments?

One encouraging thing I’ve seen is that even the most restricted, conservative organizations have found a way to invest in AI tooling for their workforce. Larger providers, in particular, are able to offer certain protections—whether legal, security, or privacy-related—which gives their customers more confidence to begin experimenting.

So, we’re seeing even the most cautious companies start small: launching internal LLMs, building their own prototypes, and experimenting with non-sensitive use cases. Across the board, organizations are making investments in this space.

In large companies, though, simply knowing that AI is the future isn’t enough. As product developers, managers, and engineers, there’s often a need to justify investment—and that’s where we sometimes get stuck. The ROI calculations for AI tools can be vague, even shaky. It’s difficult to put a clear number on how much improvement to expect.

One approach I’ve seen work really well—regardless of whether you’re introducing a new technology, process, or methodology—is to start with a small, focused team. Give them a specific problem to solve, and shield them from unnecessary external dependencies. Let them go deep, find a solution, and then share their learnings as widely as possible across the organization.

Of course, top-down support is also critical. Having someone in senior leadership actively championing and investing in AI adoption can make a huge difference. But people are naturally afraid of change—and in large companies, culture doesn’t just shift on its own.

If you're trying to drive transformation, you need to work with both ends of the spectrum: those who are enthusiastic and those who are cautiously evaluating the future.

And across organizations of any size, there’s often fear around job security. People may worry that if they start using GenAI now, their role could become obsolete—that the company will see AI as a replacement. I haven’t personally seen this happen yet, but the fear is real. That’s why building trust within teams is essential.

Start with your smallest functional unit—your product trio, product squad, or product team—and focus on fostering trust. Create an environment where people can experiment openly and safely. That’s what leads to real adoption.

It’s also important to articulate a vision for the future—showing individuals not just where AI is going, but how their roles will evolve alongside it. This applies to smaller companies too, though in startups, it tends to be easier. People know each other better, they’re used to working closely together, and there’s often a stronger sense of unified culture.

But those are a few of the things I’ve observed when it comes to rolling out AI tools—particularly at scale and in larger, more complex organizations.

You mentioned Shopify—any other new AI developments worth checking out?

Oh, it's so hard to pick just one thing—because honestly, the pace of progress is absolutely mind-blowing. Just keeping up with all the new models being released, what’s happening with AI agents, and emerging capabilities like the Model Context Protocol (MCP)—it’s a lot.

Personally, I’m really inspired by the code-generating tools—the kind where you just type in a prompt and suddenly you have a fully working prototype. That’s incredible.

Right now, my focus is on experimenting more with those tools, especially as AI agents continue to mature. That’s the space I’m most excited to explore further, and I think it’s going to be extremely relevant for the clients I work with on a day-to-day basis.

On the flip side, is there anything you think is overhyped in AI for product management right now?

There's been a lot of hype around AI replacing product managers entirely—but we're not there yet. Some companies even require you to justify why a new PM or engineer can’t be replaced by AI. While I get the push to explore AI’s potential, there’s still so much organizational context and nuance that AI can’t fully grasp—at least not yet.

Never say never—like you said earlier. To wrap up, do you have a favorite book, tool, or practice you'd recommend to other product managers?

I do love to read, so I always have tons of product books I could recommend—depending on who I’m speaking to. But one area I’m particularly interested in is systems thinking. It’s been around for decades; it’s not new.

One book I read recently that really stood out is Drift Into Failure by Sidney Dekker. I’d encourage anyone working in product development to dive deeper into systems thinking.

As we add generative AI tools and agents, we’re not just increasing levels of abstraction—we’re also adding significant complexity to the systems we’re building. And having the ability to think from a systems perspective is no longer just a nice-to-have—it’s absolutely essential.

There are so many first-order, second-order, and even third-order effects in the work we’re doing now. Having a solid framework and understanding of these concepts will make you a much better product builder. So yes, I’d recommend that book—but really, any resource that helps you get up to speed on systems thinking is worth the time.

Let's get to our favorite fives: questions we ask every product manager who joins us on this podcast. First, what is your biggest challenge as a product manager?

It really depends on who I’m working with, but internally, one of my biggest challenges is something we often talk about—creating repeatable methods in the midst of such rapid change.

I mentioned the playbook earlier—we have our tried-and-tested product development methods. But now the challenge is evolving those methods to ensure they still work in an AI-driven world. That also means creating entirely new ones: methods for building AI products and for using AI tools effectively.

So, staying on top of what’s happening in the industry, evolving our practices, and continuing to grow the craft—that’s something I’m deeply excited about. But it’s also a huge challenge.

Then again, that’s exactly why we’re here, right? So that’s definitely one of the big internal challenges I’m focused on.

If you could choose one key metric, your North Star, to define your success as a product leader, what would that be?

When I think about product leadership, I think about scaling myself through my team. So whatever impact my team has had—on the success of our customers or the success of the business—that's where I look to measure success.

You can disentangle the two, of course, and impact can be measured in many different ways. But as a leader, my own success is reflected in how well my teams are doing.

I also believe that if you're helping your customers succeed, your business has a strong potential to succeed as well. It’s not always a one-to-one correlation, but often those outcomes go hand in hand. So yes, the impact my teams have had—that’s what matters most to me.

How do you collect feedback from your customers? What processes or tools do you use?

Whenever I work with a new client, I adapt to the tools they're using. As someone who works with many different companies, it really depends on the context. But when I think about my clients and the companies I work with, sometimes the most effective thing is just going back to the basics—like sitting down over a coffee and having a direct, open conversation.

One of my favorite ways to collect feedback is still retrospectives. They haven’t lost their relevance, even in today’s fast-paced world. And I think we sometimes forget how important they are for building trust.

When you sit down with someone and have a genuine conversation, they feel heard—and that deepens trust. It also helps you get beneath the surface and truly understand what matters most.

So for me, whether it’s face-to-face or at least synchronous, those human touchpoints remain incredibly valuable—regardless of which product I’m building. Yes, automated tools can help you collect more data, but that human element still carries a lot of weight.

When it comes to 'build versus buy', how do you decide?

I’ve seen this across many clients: when something is a core differentiator for your company, that’s where you should focus on building.

But often, companies spend time building things that aren’t core—features that are necessary, but not unique. Whether it’s security tooling or something else entirely.

For example, I once worked on retail software for consumer electronics. The core product was the main focus, but we also needed to make it work in physical stores. Yes, people still shop in physical stores!

So for that in-store version, maybe it made more sense to buy a service or an off-the-shelf product instead of using precious engineering time to build it from scratch.

The key is knowing what truly differentiates your product. That’s where you go deep.

The other situation where companies often choose to build is at scale. Larger tech companies will sometimes build their own internal tools to maintain control—often for reasons like privacy, security, or competitive advantage.

So those are just a few of the factors I see when companies are making the build-versus-buy decision.

What role do AI and AI agents play in your product strategy? Could you share a specific use case you’re targeting with AI to help your users?

I don’t think I mentioned these two earlier, but there are some really interesting examples we've worked on.

One is in preclinical drug discovery. In that space, the first challenge is always making sure you can actually access and understand your data—before you even begin using it in something like a chatbot. Once you do, you can start extracting valuable insights that would have been impossible to surface otherwise.

I’m seeing a similar pattern in the oil and gas sector. There’s a huge amount of technical data—much of it highly unstructured and some dating back decades—that's just sitting there. It exists, but it’s not usable. So being able to unlock that data and layer an LLM on top has been genuinely transformative for those businesses.

Another fun and exciting use case has been personalization in the fitness space. I can’t share the name of the client, but I can say they’re quite large. There’s already so much activity in fitness and wellness, and AI is becoming a powerful way for companies in that space to scale their impact—delivering more tailored and engaging experiences to users.

So those are just a few of the different industries we’re working with in the AI space.

Of course, we’ve also talked about Thoughtworks and our Product Thinking Playbook. AI plays a big role there too—extending our ability to help clients by scaling and codifying the knowledge and culture we’ve built. It’s one of our three key focus areas alongside design and engineering.

What’s most exciting to me is going beyond just “let’s use AI tools” or “let’s build a chatbot.” It’s really about how AI transforms the way we think, and the way we build products. That shift is still evolving, and it’s an incredibly exciting space to be part of right now.

Closing remarks

AI is changing the way we build products—but also the way we think about product management itself. As Ivana shared, the key is to invest in your craft, experiment often, and stay curious. Follow us for more conversations with product leaders navigating this shift.

You can contribute to the topics we prepare for our PM talks. Post your desired question or topic below, and we’ll happily consider it for future episodes!

You’ve just read an interview from our podcast, where we speak with product leaders who share their experiences. Follow us on Spotify or YouTube for more episodes.

Author
Jiri Novacek
Tech enthusiast, sales professional, and podcast host.