LLM in, but what is coming out? The strategic implications of your “AI fluency”

AI fluency is becoming a requirement, but do we really understand how to best use LLMs, and the strategic implications of doing so?

Let’s be honest: lately all we talk about is AI. From CEOs requiring AI fluency to Apple’s paper on LRMs (Large Reasoning Models), em dashes in every text, and the next wave of redundancies. The AI revolution is definitely upon us.

But in all this talk and rush to exploit LLMs, I think we’re not talking about the elephant in the room.

We want everyone to work with AI, set requirements on AI, ask for AI fluency. But how many take the time to understand what is really behind those “magic black boxes” that make our lives so efficient?

I find it quite fascinating that we have such blind trust in these models, without (dare I say) taking the time to understand how they work. Where else would we do that?
We would never give a team a problem to solve and accept that they launch without questioning anything. We would never deploy a strategy without doing our homework. But with LLMs, we’re forgetting it all, and just accepting the results.

We accept the “black box” and want only the magic coming out of it. Isn’t it weird?

I see many CEOs talking right and left about what they expect from their employees, advocating for “AI first” and setting new standards and expectations. But I would love to understand: what is their own AI fluency when it comes to the long-term consequences of their actions?

To be clear: I’m not arguing against AI. Quite the contrary. But if you’re serious about implementing AI in your organization (or even in your daily work), the very first step you have to take is understanding how it works.

Because if you don’t know the input, how can you expect to have any real control over the output?

Start by understanding the “magic” behind LLMs

I’m sure many of you reading this article are using LLMs in some way. Maybe you are even summarizing it with an LLM without reading it 😱 But how many of you have actually taken the time to understand how these models work? Be honest.

I didn’t either in the beginning, until I started to see the expectations being put on AI. Then I figured: if we want to use the tool, we need to understand the tool. And please don’t try to “understand” by reading proxies like “I’ve read it so you don’t have to.” Or, even worse, using an LLM summary. So meta. Do the real work. Put in the effort.

You can start with the now-famous Apple paper, where they talk about LRMs and ask the question: do these reasoning models really reason? I read the paper (and again, you should too!) and I’d say there was nothing extremely revolutionary in it, if you already know how LLMs work.

A few key takeaways for those not familiar with LLMs or LRMs. These models:

  • Don’t think—they give back information based on what is statistically likely.

  • Don’t learn—they are pre-trained and don’t evolve from your interactions.

  • Don’t feel—you might think your GPT knows you, but there’s no empathy in there. Just math, based on the style you prefer and the input you give.

The real problem comes when we don’t stop and reflect on what this means, and instead assume these models are our “Einstein in the basement,” as Henrik Kniberg put it. A genius that thinks with (or for) us.

What the Apple paper does well is make it clear that these models don’t think. They are not geniuses. They hold lots of information and serve back the most likely next thing. I think of it as a Wikipedia on steroids, not as a genius who can challenge the status quo the way Einstein did.
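To make “the most likely next thing” concrete, here is a toy sketch in Python. The context, the candidate tokens, and the probabilities are all made up for illustration; a real LLM scores tens of thousands of tokens with a neural network, but the selection step works on the same principle:

  import random

  # Toy "model": for one context, a made-up probability for each candidate next token.
  # A real LLM computes these scores with a neural network over a huge vocabulary.
  next_token_probs = {
      "Einstein was a": {"genius": 0.62, "physicist": 0.30, "patent": 0.05, "violinist": 0.03},
  }

  def predict_next(context: str) -> str:
      probs = next_token_probs[context]
      tokens = list(probs.keys())
      weights = list(probs.values())
      # Pick a token in proportion to its likelihood: no understanding, just statistics.
      return random.choices(tokens, weights=weights, k=1)[0]

  print(predict_next("Einstein was a"))  # usually "genius", sometimes one of the others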

Remember: it’s a machine we’re talking about. A statistical machine that doesn’t create, and doesn’t think.

Want proof? Ever wonder why an LLM can’t tell how many R’s are in “strawberry”? It’s been brought up before, how LLMs can solve complex questions but stumble over something so basic. A plausible explanation is that the model works on tokens rather than individual letters, so it can’t literally count characters; it leans on associations like the “berry” part of the word, hence the “2 Rs” answer.
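You can see the token issue for yourself with OpenAI’s open-source tiktoken library (pip install tiktoken). The encoding name below is one used by GPT-4-class models, so treat it as an example rather than the exact tokenizer of whatever model you happen to use:

  import tiktoken

  # Load one of the tokenizers used by GPT-4-class models.
  enc = tiktoken.get_encoding("cl100k_base")

  token_ids = enc.encode("strawberry")
  pieces = [enc.decode([tid]) for tid in token_ids]

  # Prints a few sub-word chunks, not ten separate letters:
  # the model never "sees" the individual R's it is asked to count.
  print(pieces)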

So if you understand how they work, you also understand that they’re designed to give you the impression they’re thinking. But they’re not. So why does this matter?

LLM in, what is coming out?

Understanding all this matters because it helps you leverage the models in the best possible way. Getting to real AI fluency with intent.

The way I think about use cases is tied to this perspective. We need to equip people and organizations with the right mindset to ask:

  • What are we trying to do?

  • What problem are we solving?

  • Will LLMs help us get there faster, or just distract us?

Some use cases to reflect on

1. Gathering and processing information

We’ve learned that LLMs are like Wikipedia on steroids. Amazing for getting to sources faster and pulling them together.

Examples:

  • Saving time with meeting notes. I am a BIIIG fan of this.

  • Getting fast answers to fact-based questions: data analysis, market research, competitor insights (publicly available). But do remember to apply critical thinking on what you read; don’t take it for granted

  • Summarizing documents

  • Helping you reflect: try voice notes, then check how clear (or unclear) your point is based on what the model gives back

Here’s the key: the model can give you facts. But you have to ask the right questions and apply critical thinking to the answers.
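As a concrete example of the summarization use case, here’s a minimal sketch with OpenAI’s Python client. The file name, prompt, and model are placeholders to adapt, and the same caveat applies: read the output critically before trusting it.

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

  # Placeholder document; swap in whatever you actually need summarized.
  with open("meeting_notes.txt") as f:
      document = f.read()

  response = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; use whichever model you have access to
      messages=[
          {"role": "system", "content": "Summarize this document in five bullet points."},
          {"role": "user", "content": document},
      ],
  )

  # Treat the result as a draft, not the truth: check it against the source.
  print(response.choices[0].message.content)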

2. Strategy

I hear a lot of people saying strategy is easier now because of LLMs. And oooh, I so disagree.

I’ll skip over the argument that strategy is not a document but a process that requires buy-in (though I’d argue that’s already enough to disqualify LLMs). Let’s go deeper.

What makes strategy?

  • Knowing your customers: LLMs know what’s online. They might help build personas (which I’ve rarely seen work in real life), but they can’t uncover the hidden pain you might hear in a 1:1 conversation. They can’t read the emotions the way you can.

  • Knowing your market: They can surface competitor info. If it’s published. And if those companies are already considered direct competitors. But what about the companies you could anticipate becoming competitors?

  • Knowing your edge: Understanding what really differentiates your company takes deep internal knowledge. How the company operates. How people collaborate. Is a model going to help with that?

  • Testing ideas: Here LLMs can help, once you’ve done the thinking. They’re great for prototyping fast. But you still need to know what problem you’re solving and why your company is best positioned to solve it.

  • Executing: Good luck using LLMs here. Execution is about collaboration. But maybe you can use LLMs to free up time elsewhere and reinvest that time in research and delivery.

So what can we learn?

If you’re a leader working with strategy, remember: the output of LLMs is available to everyone. Building is cheaper. Information is democratized. Things move fast. So your role is to understand where to double down and build your competitive edge.


Business-wise, I’d be crystal clear on what’s hard to copy: your brand, your customers, your distribution. Internally, I’d make space for people to build their thinking muscles. Spot insights beyond patterns. Anticipate competitors. Execute well. That’s your moat. Tell people what you want to achieve; don’t hand them a ChatGPT answer to every problem.

3. Mundane tasks

This is the holy grail for many: using LLMs to save time on boring stuff: emails, product updates, summarizing long decks or documents.

By all means, if something is time-consuming, requires little brainpower, and has low ROI, automate it! But remember: every time you delegate to the model, you might be neglecting an important skill.

For me, that was writing emails in Swedish. It’s my third language, so it was easy to jot things in English and have the LLM translate and polish. Saved time? Yes. But very quickly, I noticed I was losing my feel for the language. I became dependent on the LLM. That skill started to fade.

Now I force myself to write the first version. I only use the model for grammar tweaks.

It’s a simple example, but I think it illustrates the point: some tasks take time for a reason. That effort might be worth investing in. Be intentional about what you delegate, and think short, medium, and long term.

To sum it up

LLMs and LRMs are tools. The pressure to use them is growing fast. This is exactly when, as leaders and individuals, we need to take the time to understand what we’re working with and be intentional in our choices.

If you made it this far: thank you for reading, and for joining me on this reflection about LLMs and their strategic implications. And just to be clear: I’m not a data scientist or an LLM expert. What I’ve written above is based on my research, my experience, and a good dose of curiosity.

Here are some resources I used to inform my thoughts. If you have others—send them my way. And remember: always stay curious.

Resources
