I Was Early to AI in 2018—Here's What I Learned

TL;DR: In 2018, I shipped AI-generated audio news summaries and personalized push notifications at the Netherlands’ largest news platform. The features worked. The lessons I learned then—start small, don’t oversell, let AI augment rather than replace—are the same lessons people are relearning now. Being early didn’t make me a prophet. It gave me pattern recognition that helps me ship faster today.

What did AI look like in 2018?

Different enough that most people ignored it. Similar enough that the core lessons still apply.

In 2018, I was a PM at NU.nl, the Netherlands’ most-used news platform (8M+ monthly active users). We had two AI initiatives running: personalized push notifications and AI-generated audio news summaries. Both worked. Neither made headlines.

The AI we used wasn’t ChatGPT. It wasn’t generative in the way we think about it now. But it was intelligent enough to personalize content, generate audio from text, and learn from user behavior. The tech was clunkier, the vocabulary was different (“machine learning” more than “AI”), but the principles were the same.

Looking back, here’s what I learned that still holds.

Why does being early matter?

Because iteration beats theory. You build intuition you can’t get from demos.

When the AI boom hit in 2022-2023, I noticed something: people who had shipped AI features before moved faster than those who hadn’t. Not because they understood the tech better, but because they’d already made the obvious mistakes.

They knew AI products overpromise. They knew users have lower tolerance for AI errors than human errors. They knew that the gap between “demo” and “production” is massive.

I knew these things because I’d learned them the hard way in 2018. That intuition let me ship AI Writer to 400K monthly users at Picsart in 2022 while others were still debating if AI content would work.

The intuition advantage

Being early gives you:

  1. Pattern recognition — You’ve seen what works before the best practices exist
  2. Mistake immunity — You’ve already made the beginner errors
  3. Healthy skepticism — You know what AI can’t do, not just what it can
  4. Shipping confidence — You’ve seen the gap between prototype and product

These compound. By the time everyone else is experimenting, you’re optimizing.

What did we actually build at NU.nl in 2018?

Two things: personalized push notifications and AI-generated audio news.

Personalized push notifications

The challenge: NU.nl sends push notifications for breaking news. But “breaking” to a sports fan is different from “breaking” to a politics reader. We were sending the same notifications to everyone—which meant either too few (missing relevant stories) or too many (annoying users).

The solution: use behavioral data to personalize notification relevance. The AI learned which topics each user engaged with and weighted notification importance accordingly.

The result: we won the DDMA AI Hackathon with the concept, then rolled it out to the full 8M user base. App opens increased 5%. Retention improved 3%.
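The mechanics of topic-affinity weighting can be sketched in a few lines. This is a hypothetical reconstruction, not NU.nl's actual model: the decay factor, the threshold, and the 50/50 blend of editorial priority and affinity are all illustrative.

```python
# Hypothetical sketch of topic-affinity weighting for push notifications.
# All constants and topic names are illustrative, not NU.nl's real values.

DECAY = 0.9           # older interactions count a little less each day
SEND_THRESHOLD = 0.5  # minimum relevance score before we push a story

def update_affinities(affinities, clicked_topics):
    """Decay all affinities, then reinforce topics the user engaged with."""
    for topic in affinities:
        affinities[topic] *= DECAY
    for topic in clicked_topics:
        affinities[topic] = affinities.get(topic, 0.0) + 1.0
    return affinities

def relevance(affinities, story_topics, editorial_priority):
    """Blend editorial importance with the user's normalized topic affinity."""
    total = sum(affinities.values()) or 1.0
    affinity = max(affinities.get(t, 0.0) / total for t in story_topics)
    return 0.5 * editorial_priority + 0.5 * affinity

# A sports fan gets a mid-priority sports story; a politics story stays quiet.
user = {"sports": 3.0, "politics": 0.5}
print(relevance(user, ["sports"], editorial_priority=0.4) > SEND_THRESHOLD)
```

The decay term is the part that matters in practice: without it, a one-week binge on a topic haunts the user's notifications for months.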

AI-generated audio news

The challenge: users wanted to consume news while commuting, exercising, or cooking—situations where reading is impractical. Hiring voice actors to record every article was impossible at our publishing velocity.

The solution: AI-generated audio summaries. We synthesized daily news digests using text-to-speech technology that sounded good enough to not be distracting.

The result: 90K daily listeners within a few months. Users loved having a “morning briefing” they could listen to without looking at a screen.

What mistakes did we make?

Every mistake you’re probably making now. That’s the point.

Mistake 1: Overselling the AI

When we first pitched the personalization project internally, we framed it as “AI that knows what you want to read.” The editorial team pushed back hard. Rightfully.

What we were actually doing was much more modest: weighting notifications by topic affinity. We weren’t reading minds. We weren’t creating filter bubbles. We were just being slightly smarter about what we surfaced.

The lesson: undersell AI capabilities internally and externally. Users and stakeholders have inflated expectations. Ground them early.

Mistake 2: Ignoring edge cases until they became problems

Our text-to-speech handled Dutch fine, but struggled with proper nouns, English words mixed into Dutch text, and unusual punctuation. We’d get reports of the AI mispronouncing politicians’ names or reading URLs aloud character by character.

We should have caught these in testing. We didn’t test broadly enough.

The lesson: AI products need more edge case testing than traditional products. The failure modes are less predictable.
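Much of that edge-case handling ends up as mundane text preprocessing before the audio is ever synthesized. Here is a minimal sketch of the idea; the pronunciation map and the URL replacement text are made up for illustration, not what our pipeline actually did.

```python
import re

# Hypothetical TTS pre-processing sketch. The phonetic spellings and the
# replacement phrase are illustrative stand-ins, not NU.nl's real rules.

PRONUNCIATIONS = {
    "Schiphol": "Skip-hol",  # nudge the engine toward the Dutch pronunciation
}

URL_RE = re.compile(r"https?://\S+|www\.\S+")

def prepare_for_tts(text):
    # Replace raw URLs so the engine doesn't read them character by character.
    text = URL_RE.sub("(link in artikel)", text)
    # Swap in phonetic spellings for names the engine mispronounces.
    for name, phonetic in PRONUNCIATIONS.items():
        text = text.replace(name, phonetic)
    return text

print(prepare_for_tts("Vertraging op Schiphol, zie https://nu.nl/x1y2"))
```

A lookup table like this is crude, but it is exactly the kind of unglamorous fix that closes the gap between a clean demo and a production system reading real articles.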

Mistake 3: Assuming AI would sell itself

We thought users would be excited that notifications were “powered by AI.” They weren’t. They just wanted relevant notifications. The AI was invisible infrastructure, not a feature.

The lesson: don’t lead with “AI-powered” in positioning. Lead with the outcome. AI is a means, not an end.

What principles from 2018 still hold in 2025?

Almost all of them, just at a different scale.

Principle 1: Start small, prove value, then expand

In 2018, we didn’t try to personalize the entire NU.nl experience. We started with push notifications—a contained scope with measurable outcomes. Once that worked, we earned the credibility to expand.

In 2025, the same principle applies. Don’t try to “AI-ify” your entire product. Pick one high-value, low-risk feature. Prove it works. Use that proof to get buy-in for bigger initiatives.

Principle 2: AI should augment, not replace

Our AI didn’t replace human editors. It helped them prioritize. It didn’t replace writers. It generated audio from their work. The humans stayed in the loop.

The best AI products in 2025 follow the same pattern. Copilots, not autopilots. Assistants, not replacements. Users trust AI more when they feel in control.

Principle 3: The gap between demo and production is massive

Demo: “Look, the AI generates perfect audio!”

Production: “Why is it pronouncing ‘Schiphol’ as ‘Skeefol’?”

This gap exists in every AI product. LLMs today are more impressive than anything we had in 2018, but they still fail in production ways they don’t fail in demos. Plan for that gap.

Principle 4: Users blame AI more harshly than humans

When a human editor sends a bad push notification, users forgive it. When “the AI” sends a bad notification, users question the entire system.

Fair or not, this asymmetry is real. AI products need higher reliability than their human-operated equivalents, or users lose trust faster.

How did 2018 experience help me in 2022?

It let me ship AI Writer to 400K monthly users while others were still debating.

In early 2022, generative AI was emerging but not yet mainstream. Most companies were watching. At Picsart, we saw an opportunity: content creators needed copy (social posts, captions, descriptions) and would pay for tools that helped them generate it.

We built AI Writer—60 tools for different writing use cases. By the time ChatGPT launched in November 2022 and everyone rushed to add AI features, we already had a product with traction: 400K monthly active users generating 1.6M pieces of content per month.

What 2018 taught me that applied directly

  1. Don’t wait for perfect AI — We shipped with GPT-3 and improved as models got better
  2. Focus on workflow, not capability — Users don’t want an AI that writes. They want their caption done.
  3. Expect weird failures — We built moderation and fallbacks from day one
  4. Don’t lead with AI in marketing — We called it “AI Writer,” but the pitch was “get your copy done in seconds”
  5. Ship fast, fix in production — Perfect was the enemy of learning

None of these insights were revolutionary. They were pattern recognition from 2018, applied to 2022 conditions.

What’s different about AI in 2025?

Three things have fundamentally changed. Everything else is scale.

1. AI can now generate, not just classify

In 2018, AI was mostly about classification and prediction. This content is sports-related. This user is likely to click. The inputs and outputs were structured.

In 2025, AI generates novel outputs—text, images, video, code. This changes what’s possible but also what can go wrong. A classification error is a false positive. A generation error is a hallucination that looks plausible.

2. Users expect AI features

In 2018, adding AI was a differentiator. In 2025, lacking AI is a liability. User expectations have shifted. If your competitor has an AI assistant and you don’t, you’re not “classic”—you’re outdated.

3. The build/buy calculus has flipped

In 2018, building AI capabilities required significant ML expertise. In 2025, you can access frontier models via API. The hard part isn’t the AI—it’s integrating it into a product that works.
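To make the flipped calculus concrete, here is a minimal sketch of what "access via API" looks like today. The endpoint shape matches the OpenAI chat completions API; the model name and prompt are placeholders, and the actual network call requires a real API key.

```python
import json
import urllib.request

# Minimal sketch of calling a hosted frontier model over HTTP.
# Model name and prompt are placeholders; a real key is required to send it.

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, model="gpt-4o-mini"):
    """Assemble the JSON body the chat completions endpoint expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def call_model(prompt, api_key):
    """Send the request. Not executed here: it needs a live API key."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(build_payload("Write a caption for a sunset photo"))
```

In 2018 this handful of lines would have been a team of ML engineers and months of training infrastructure. That is the whole flip: the model is a commodity, the product integration is the work.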

What hasn’t changed

  • Users still care about outcomes, not technology
  • AI still fails in unexpected ways
  • The gap between demo and production still trips up teams
  • Starting small still beats starting big
  • Augmentation still beats replacement

What would I tell a PM shipping their first AI feature?

Six things I wish someone had told me in 2018.

1. Ship something embarrassing first

Your first AI feature will be underwhelming. Ship it anyway. The feedback loop from real users is worth more than another sprint of polish.

2. Build the moderation layer on day one

AI will eventually generate something bad: content that’s offensive, factually wrong, or legally problematic. Don’t wait until it happens in production to figure out your response.
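The shape of that layer can be as simple as a wrapper around the generator. This is a hypothetical sketch: `check_content` and its blocklist are stand-ins for whatever moderation service or classifier you would actually use.

```python
# Hypothetical moderation-and-fallback wrapper. The blocklist check is a
# placeholder for a real moderation API or classifier.

BLOCKLIST = {"offensive-term"}  # illustrative; real systems use richer signals
FALLBACK = "Sorry, we couldn't generate a result for this request."

def check_content(text):
    """Return True when the text passes moderation."""
    return not any(term in text.lower() for term in BLOCKLIST)

def safe_generate(generate, prompt, retries=2):
    """Call the generator, moderate the output, and fall back instead of failing open."""
    for _ in range(retries + 1):
        candidate = generate(prompt)
        if check_content(candidate):
            return candidate
    return FALLBACK

# Usage with a toy generator standing in for the model call:
print(safe_generate(lambda p: f"A caption about {p}", "sunsets"))
```

The important design choice is the last line of `safe_generate`: when moderation keeps failing, the user gets a graceful fallback rather than the bad output or a crash.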

3. Measure user trust, not just usage

Users might engage with an AI feature while losing trust in your product overall. Track sentiment, not just click-through rates.

4. Communicate uncertainty clearly

When AI confidence is low, say so. Users handle “I’m not sure about this” better than confidently wrong answers.

5. Don’t chase the hype cycle

The AI feature of the week changes monthly. Build for your users’ actual problems, not what’s trending on Twitter.

6. Remember that AI is infrastructure

The best AI products make AI invisible. Users don’t want to “use AI.” They want their problem solved.

How has the 2018 experience shaped my work today?

It made me faster and more skeptical in useful ways.

When I evaluate an AI product or feature now, I have a mental checklist from 2018:

  • Does this solve a real problem, or is it AI for AI’s sake?
  • What happens when the AI fails?
  • How will users react to errors?
  • Is this augmenting or replacing?
  • What’s the gap between the demo and production?

These questions aren’t original. But having lived through the answers in 2018, I ask them reflexively. That reflex saves time.

Today, I build AI workflows daily using Claude, ChatGPT, and N8N automations. The tools are orders of magnitude more capable than what we had at NU.nl. But the human problems—trust, adoption, error handling, positioning—are identical.

Being early didn’t make me smarter. It just gave me a head start on making mistakes.

Key Takeaways

  • Being early to AI gives you pattern recognition that’s hard to acquire from theory
  • The principles from 2018—start small, augment don’t replace, undersell capabilities—still hold
  • The gap between AI demo and production is always larger than expected
  • Users forgive human errors more easily than AI errors—plan for asymmetric trust
  • AI should be invisible infrastructure, not a feature you lead with
  • Ship something imperfect early; the feedback loop is worth more than polish
  • The tech has changed dramatically; the human problems haven’t changed at all

FAQ

What AI tools did you use in 2018?

For personalization, we used in-house machine learning models trained on behavioral data. For audio, we used early text-to-speech APIs (the specific vendor has since been acquired). Neither was sophisticated by 2025 standards, but they got the job done.

Did the AI features at NU.nl survive?

As far as I know, yes: both are still in production. The personalization evolved significantly, and the audio summaries improved as TTS technology advanced. The foundations we laid held up.

How do you balance AI skepticism with AI enthusiasm?

I’m skeptical of claims and enthusiastic about shipping. Every AI product I’ve seen overpromises in demos. But the only way to learn what works is to build, deploy, and iterate.

What’s the biggest AI mistake you see companies making in 2025?

Trying to AI-ify everything at once. Pick one feature, prove it works, then expand. The companies trying to add AI assistants, AI search, AI generation, and AI analytics simultaneously ship nothing well.

Do you think AI will replace PMs?

No. AI will replace parts of PM work—research synthesis, draft writing, data analysis. But the core PM job—deciding what to build and why—requires judgment that AI assists but doesn’t replace. The best PMs will be the ones who use AI to do their jobs faster, not the ones who resist it.

Reflections on shipping AI at NU.nl (2018) and Picsart (2022-present).

Say hi! 👋

I'm always up for talking product, growth, or AI workflows. Reach out if something here resonated.

Reach out on LinkedIn