AI Productivity · Reflection

I was early to AI in 2018—here's what I learned

Reflections on shipping AI-generated audio news at NU.nl before the current AI boom, and what being early taught me about AI adoption.

Niels Kaspers
December 15, 2024
11 min read

TL;DR

In 2018, I shipped AI-generated audio news to 90K daily listeners. The key lesson: AI technology matters less than solving a real problem. Set appropriate expectations and iterate relentlessly on content curation.

In 2018, I shipped an AI-generated audio news product at NU.nl that reached 90,000 daily listeners. This was before GPT-3, before the AI hype cycle, before every startup had "AI-powered" in their tagline. Here's what that experience taught me about adopting emerging technology—and why most of it still applies today.

The Project

NU.nl is the largest news platform in the Netherlands—8M+ monthly users, the default homepage for a good chunk of the Dutch population. Think of it as the BBC News of the Netherlands, except everyone actually reads it.

The challenge: how do we reach users who don't have time to read? Commuters, parents getting kids ready in the morning, people at the gym. The news audience was there. The format wasn't.

The solution: AI-generated audio summaries of news articles, delivered as a daily podcast and available as on-demand audio on every article page.

The Team and Tech

We were a small team—a product manager (me), two engineers, a data scientist, and an editorial lead who kept us honest about journalism standards. Five people total, building something that didn't exist yet.

The tech stack in 2018 was nothing like today. There was no GPT-anything. We used:

  • Text-to-speech engines from Google Cloud and Amazon Polly for the voice generation
  • Custom NLP pipeline for article summarization—extracting the key sentences, rewriting for spoken cadence
  • Editorial rules engine we built ourselves to handle Dutch pronunciation quirks (try getting AI to say "Scheveningen" correctly)
  • Node.js backend that processed articles, generated audio, and pushed to podcast feeds
  • A/B testing framework on the website to test different audio player placements
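The rules engine itself is easiest to understand as a lookup table: words the TTS engine mangles, mapped to phonetic respellings and substituted into SSML before the text reaches Google Cloud or Polly. A minimal sketch of that idea (the respellings and word list here are illustrative, not the actual NU.nl rules):

```javascript
// Hypothetical pronunciation fixes: words the TTS engine tends to mangle,
// mapped to phonetic respellings (illustrative examples only).
const PRONUNCIATION_FIXES = {
  Scheveningen: "Sgey-vuh-ning-uh",
  Gorinchem: "Gho-rin-chem",
};

// Wrap known-problem words in SSML <sub> tags so the engine reads the
// alias instead of the written form, then wrap the result in <speak>.
function applyPronunciationFixes(text) {
  let out = text;
  for (const [word, alias] of Object.entries(PRONUNCIATION_FIXES)) {
    const pattern = new RegExp(`\\b${word}\\b`, "g");
    out = out.replace(pattern, `<sub alias="${alias}">${word}</sub>`);
  }
  return `<speak>${out}</speak>`;
}
```

Both Google Cloud TTS and Amazon Polly accept the SSML `<sub>` tag, which is what makes this kind of dictionary cheap to maintain: editorial staff can add entries without touching the pipeline.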

The summarization was the hard part. TTS engines could read text aloud, but news articles are written for eyes, not ears. Long sentences with embedded clauses that work on screen sound terrible when spoken. We had to build a pipeline that restructured sentences for audio—shortening them, removing parenthetical asides, converting written numbers to spoken form.
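The real pipeline combined NLP models with hand-written rules, but the flavor of those rules can be sketched in a few lines. Assuming a per-sentence pass (the number map and splitting heuristic here are illustrative):

```javascript
// Spoken forms for a few small numbers that appear constantly in news copy
// (illustrative subset; the real pipeline handled Dutch, dates, and units).
const NUMBER_WORDS = { 1: "one", 2: "two", 3: "three", 10: "ten" };

// Rewrite a written sentence into shorter spoken-friendly sentences:
// drop parenthetical asides, spell out small numbers, split at semicolons.
function rewriteForAudio(sentence) {
  // 1. Remove parenthetical asides - they derail a listener.
  let s = sentence.replace(/\s*\([^)]*\)/g, "");
  // 2. Spell out small standalone numbers.
  s = s.replace(/\b(\d+)\b/g, (match, n) => NUMBER_WORDS[n] ?? match);
  // 3. Split at semicolons so each clause becomes its own short sentence.
  return s
    .split(";")
    .map((part) => part.trim())
    .filter(Boolean)
    .map((part) => (part.endsWith(".") ? part : part + "."));
}
```

Each rule is trivial on its own; the compounding effect of a few dozen of them is what makes written copy listenable.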

The DDMA Hackathon

Before we got buy-in for the full project, we proved the concept at a DDMA hackathon (the Dutch trade association for data-driven marketing). Our pitch: "What if every news article had an audio version generated in real-time?"

We built a working prototype in 48 hours. It was rough—the voice was clearly robotic, the summarization occasionally mangled context, and we had to hardcode about 50 Dutch city names because the TTS kept butchering them. But it worked. You could open an article and hear a 60-second audio summary within 30 seconds of the article being published.

We won. The judges—a mix of media executives and tech investors—were genuinely surprised it was possible. In 2018, most people hadn't interacted with AI-generated content at all. The novelty factor was enormous.

That hackathon win gave us the internal credibility to pitch the full product. Sometimes the fastest way to get resources is to build the thing first and ask for permission after.

Why It Worked

1. We Solved a Real Problem

People wanted news during commutes but couldn't read while driving or cycling (this is the Netherlands—lots of cyclists). Audio was the answer, but producing traditional podcasts with human narrators didn't scale. NU.nl publishes 300+ articles per day. You can't hire enough voice actors for that.

AI let us generate audio for 50-80 articles daily—the top stories across every section. Impossible with human narrators at any reasonable budget. The per-article cost went from roughly €50 (professional narration, editing, mastering) to about €0.02 (API calls to TTS services). That's a 2,500x cost reduction.

2. We Set Appropriate Expectations

We never claimed it was human-quality. The product was positioned as "AI-generated summaries"—fast, convenient, good enough for a commute. We even had a small disclaimer on the audio player: "This summary was generated automatically."

Users accepted the trade-off because the value proposition was clear: get the news in 60 seconds without reading. They weren't choosing between AI audio and a beautifully produced NPR segment. They were choosing between AI audio and no audio at all.

This framing matters. We explicitly didn't compete with their existing podcast shows. It was additive—a new format for a new context.

3. We Iterated on Feedback

Early versions were rough. Really rough. The voice was monotone, pacing was off, some pronunciations were hilariously wrong. The TTS engine would occasionally treat abbreviations as words ("NATO" became "nay-toh"), mispronounce Dutch surnames, or pause in the middle of a number.

We built feedback loops into the product:

  • User ratings on each audio summary (thumbs up/down, simple)
  • Skip patterns to identify where listeners dropped off—if 70% of listeners skipped past a certain point, the summary was too long or the content got boring
  • Comments and support tickets, which we read every morning
  • Editorial review of the 10 lowest-rated summaries each week

Each week, we improved based on data. The pronunciation dictionary grew from 50 hardcoded fixes to over 500. We added sentence-level pacing rules. We learned that listeners preferred summaries between 45-90 seconds—shorter felt incomplete, longer lost attention.
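The skip analysis was simple in principle: for each summary, find the first second at which most listeners had bailed. A rough sketch, assuming each event records how many seconds a listener played (the 70% skip threshold matches the figure above; the event shape is an assumption):

```javascript
// listenSeconds: how many seconds each listener played before stopping.
// Returns the first second at which fewer than `minRetention` of listeners
// were still playing (i.e. 70%+ had skipped), or null if that never happens.
function findDropOffPoint(listenSeconds, summaryLength, minRetention = 0.3) {
  const total = listenSeconds.length;
  for (let t = 1; t <= summaryLength; t++) {
    const stillListening = listenSeconds.filter((s) => s >= t).length;
    if (stillListening / total < minRetention) return t;
  }
  return null; // Listeners stayed to the end.
}
```

A consistent early drop-off point flagged a summary as too long or front-loaded with the wrong material, which fed directly into the weekly editorial review.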

After three months, daily listeners grew from a few hundred early adopters to 90,000. The growth was mostly organic—people discovered it, liked it, and came back. We didn't spend money on marketing the feature.

What I Got Wrong

Overestimating Technology, Underestimating Content

The AI could generate audio, but choosing what to narrate mattered more. We spent the first month obsessing over voice quality—testing different TTS providers, tweaking pronunciation, adjusting speech rate. Meanwhile, we were narrating articles that nobody wanted to listen to.

When we finally looked at the data, the correlation was clear: listener satisfaction had almost nothing to do with audio quality and almost everything to do with content selection. A robotic voice reading a fascinating story got thumbs up. A slightly-better-sounding voice reading a boring summary got skipped.

We shifted 80% of our effort to content curation—which articles to narrate, how to prioritize breaking news over evergreen content, how to sequence stories in the daily podcast so it felt like a curated briefing, not a random playlist.

Thinking Technology Alone Was the Product

Users didn't care about AI. They still don't. They cared about getting news efficiently. The technology was a means, not an end.

This lesson keeps proving itself. At Picsart today, our best-performing features don't lead with "AI-powered." The background remover page says "Remove background in seconds." The AI image generator says "Create images from text." Nobody cares about the model behind it. They care about the result in front of them.

Not Building for Mobile First

Our initial audio player was desktop-optimized—a small widget embedded in the article page. But people listened during commutes. On phones. Often on spotty mobile connections.

The first mobile version had a 4-second load time for the player alone. The play button was small. There was no offline mode. We lost early adopters to poor UX and had to rebuild the entire player experience. By the time we shipped the mobile-optimized version, we'd already frustrated a few thousand potential daily users who never came back.

What Being Early Taught Me About AI Adoption

Having shipped AI in 2018 and now working with AI daily in 2026, I've noticed patterns that repeat regardless of the technology generation:

1. AI is Infrastructure, Not Product

The best AI products don't lead with "AI-powered." They lead with the outcome. "Remove your background in seconds" beats "AI-powered background removal." "Get the news in 60 seconds" beats "AI-generated audio summaries."

This was true in 2018 and it's true now. I use AI in my daily workflow—Claude Code with custom skills, LLMs for SEO optimization, AI for generating landing pages at scale—but the output is what matters, not the mechanism.

2. Start with 10x Improvement, Not 10%

AI products need to be dramatically better than alternatives to overcome adoption friction. The audio news product worked because there was no alternative—it was infinite audio vs. zero audio. That's not a 10% improvement. It's a category creation.

At Picsart, the same principle applies. Our AI background remover doesn't just remove backgrounds 10% faster than Photoshop. It turns a 5-minute manual process into a 3-second automatic one. That delta is what drives 150M+ users.

3. Human-in-the-Loop Is Underrated

Pure automation is tempting but often worse than human-AI collaboration. At NU.nl, the best version of the product had an editorial team selecting which articles to narrate and occasionally tweaking summaries. Pure automation produced mediocre results with occasional embarrassments (like the time it narrated an obituary in an inappropriately upbeat tone).

I see this pattern everywhere now. My programmatic SEO system generates page content with AI, but a human reviews every batch before publishing. The AI does 80%, the human refines 20%. The result is better than either could produce alone.

4. Technical Capability Does Not Equal Product-Market Fit

Just because you can build something doesn't mean you should. The graveyard of AI startups is full of impressive technology solving problems nobody has.

We could have built real-time audio for every article on NU.nl—all 300+ per day. We deliberately didn't, because nobody wants to listen to 300 audio summaries. The curation was the product. The AI was just the engine.

I've carried this lesson into every project since. When I'm building side projects or tools for specific audiences, the first question isn't "what can AI do?" It's "what problem actually needs solving?"

5. Timing Matters More Than You Think

In 2018, AI audio was novel enough to get press coverage and internal excitement, but not good enough for mainstream adoption beyond news junkies. The TTS quality was acceptable, not delightful.

If we'd launched the same concept in 2023 with ElevenLabs or modern TTS, the voice quality would've been indistinguishable from human. But we'd face 50 competitors and sky-high user expectations.

Being early gave us room to learn without competition. Being early also meant our technology ceiling was lower. There's no universally right time—but understanding where you are on the curve changes your strategy. Early means you optimize for learning and positioning. Late means you optimize for execution and differentiation.

From NU.nl to Now

The thread connecting my 2018 AI work to today is surprisingly direct. The core skills transfer:

Content curation at scale → At Picsart, I now work on programmatic SEO that generates hundreds of optimized pages. The same lesson applies: the AI can generate anything, but choosing what to generate is the real product decision.

User feedback loops → Session recordings, A/B tests, and conversion data drive every landing page optimization we run. Same muscle, different context.

Building with small teams → Five people built the NU.nl audio product. My current growth team follows the same small team philosophy—tight scope, high autonomy, bias toward shipping.

Appropriate expectations → I still position AI outputs honestly. No "magical" promises. Just clear value delivered fast.

What's Different Now

The AI landscape in 2026 vs 2018:

| Aspect | 2018 | 2026 |
| --- | --- | --- |
| Model quality | Limited, task-specific | General-purpose, near-human quality |
| User expectations | Low, forgiving | High, demanding |
| Competition | Minimal | Intense |
| Infrastructure | Build everything | APIs, MCPs, and platforms everywhere |
| Hype | Niche interest | Mainstream, bordering on fatigue |
| My approach | Build the model integration | Build skills and systems on top of models |

The biggest shift: in 2018, the bottleneck was technology. Can the AI do this at all? In 2026, the bottleneck is taste. The AI can do almost anything—but should it? And how do you make the output actually good?

That's the question I find most interesting now. Not "can AI write a landing page?" but "can AI write a landing page that converts at 65%?" Not "can AI generate a design?" but "can AI generate a design that doesn't look like everything else?"

The Constant

Despite all changes, one thing remains true: users don't care about your technology. They care about their problems.

Solve real problems. Deliver clear value. Let the technology be invisible.

I learned that in 2018 watching commuters listen to robot-voiced news summaries and rate them with a thumbs up. It's the same lesson in 2026, just with better robots.


Building AI products? I've learned a lot from mistakes. Happy to share more—reach out on LinkedIn.


Written by Niels Kaspers

Principal PM, Growth at Picsart

