Vibe coding has been everywhere lately. Some people claim it will enable a single person to build a billion-dollar company.
I was skeptical.
Instead of debating the hype, I decided to run a small experiment: could I build a real product using vibe coding in a single weekend?
The result was PodClip, a web app that lets me capture and organize clips from podcast episodes on Spotify.
In about five hours of work, Replit generated most of the application—from the front-end interface to the database and authentication. In fact, I probably spent more time organizing my thoughts and writing this article than I did building the app.
Here’s what happened.
The Problem
I’m always listening to podcasts and constantly learning something new and useful. I often come across a phrase or an explanation that resonates with me. It could be a new idiom, a perfectly explained concept, or the answer to a question that’s been bugging me. This happens so frequently, but I often can’t remember the exact words or which episode I heard them in. I want to revisit these clips, but searching through my listening history is time-consuming. I need a pain-free way to store and organize podcast clips so I can revisit my favorite moments. That’s the inspiration behind PodClip, an app to save all your favorite podcast clips.
Goal
I envisioned an app that would integrate with Spotify, my preferred podcast streaming platform. Ideally, I wanted a wrapper so I could use this extra feature while listening to the Spotify app. I wanted a start/stop button to easily capture the clip while listening. Then, the app would need to store and organize my clips on a dashboard. The dashboard would need a search feature so I could easily find old clips. To enable the search feature, the app would need to transcribe the clips.
Here were the big-picture requirements for my app:
- Connect to Spotify account, my preferred streaming platform
- Add a Start/Stop button to capture clips
- Store clip timestamps and transcripts
- Organize clips in searchable dashboard
Why Replit?
I heard the same platforms mentioned in talks about vibe coding: Cursor, Windsurf, Lovable, and Replit. From my limited research, they all seemed interchangeable. To be honest, I decided to try Replit first because one of the founders helped create React.
Like other vibe coding platforms, Replit requires a subscription. I have the Replit Core subscription which costs $20 per month.
I’m not affiliated with Replit.
Building
To prepare, I listened to a Y Combinator podcast about the best tips and tricks for vibe coding. To familiarize myself with the Replit IDE and toolset, I watched the official “building your first app” and “tips and tricks” videos. The tips and tricks video showed me exactly how to integrate my Spotify account using the Replit Connectors feature.
Next, it was time to learn by doing. I started small. The initial prompt was:
Build an app that lets me bookmark clips of my favorite podcasts in Spotify
Minutes later, I was amazed by the preview of a sleek web app styled just like Spotify.
Add Clip Feature
The first iteration of the app centered around the Add Clip feature. Users could search for a podcast episode and then input the timestamps for the clip.

The initial prompt took care of the big tasks. It formatted the frontend to match Spotify’s style. On the backend, it connected to my Spotify account and set up the database schema. Replit even created and executed tests.
All episode metadata shown in PodClip—such as show name, episode title, timestamps, and artwork—is pulled directly from Spotify’s official API, in line with their developer guidelines.
Despite this strong start, manually inputting timestamps was not the user experience I had in mind. Going forward, I would have to be more specific with the agent.
Now Playing Feature
For my next prompt, I explained how I wanted to add to a clip while listening to a podcast:
I want to add clips to PodClip while i am listening to a podcast on spotify. I want to click a button to start the clip and then click a button to mark the end. Is there a way create a plug in or add-on that would open within the Spotify app? Other ideas to accomplish this?
Instead of using Build mode, I used Plan mode. This way the agent would explain the process and break it into tasks instead of automatically tinkering with the code. I switched to Plan mode because I didn’t know if my request was possible, and wanted to make sure it was viable before the agent spent time and computing credits.
The agent informed me that plugins or extensions wouldn’t work since Spotify doesn’t allow third-party add-ons. However, there were alternatives:
- A companion “Now Playing” widget in PodClip itself. The user would have to listen to Spotify in another browser tab. Using Spotify’s API, PodClip would pull in info about the current episode and the timestamp. The user could hit a Start/Stop button in PodClip and the app would capture all the details like show, episode, timestamps, and transcripts.
- A browser bookmarklet or keyboard shortcut. The user would click the bookmarklet to record the start and stop timestamps. Then, it would send this info to PodClip, but the user would still need to input episode info. Although very quick to implement, this approach was far from the seamless user experience I envisioned.
- Mobile-friendly quick-capture page. This approach works just like the widget except it is more optimized for a phone.
I decided the Widget option would be best. I toggled back to Build mode and let the agent go to work.
Challenges with Spotify API
After the agent finished, there was an issue with the Spotify connection. Because Replit’s Spotify connector is in development mode, PodClip was unable to call the playback API, which meant it couldn’t load information about the episode the user is listening to.
The agent recommended a workaround: a manual mode. The user can search for an episode, then use a built-in timer to mark clip boundaries while listening to Spotify in a separate browser tab. It’s a way to capture the clip without needing the playback API.
While sufficient, manual mode isn’t as user-friendly as I hoped. It requires the user to sync the PodClip timer with the Spotify episode, which is a hassle. However, I appreciated that the agent implemented the workaround as a stopgap. Once Replit has access to the playback API, the code already exists to pull in current episode info. This live mode feature is untested, but I like that it’s there. The app defaults to live mode: it tries to call the Spotify playback API and, if successful, pulls in the current episode, timestamp, and playback controls. If unsuccessful, PodClip falls back to manual mode.
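The fallback logic boils down to one decision based on how the playback API call goes. A hedged sketch in Python (the function name and status-code handling are my assumptions, not the app’s actual code):

```python
def choose_capture_mode(playback_status: int) -> str:
    """Decide which capture mode PodClip should use.

    The app first attempts the Spotify playback API (live mode). If the
    call fails, for example with a 403 because the connector is still in
    development mode, it falls back to the manual timer.
    """
    # 200: playback info returned; 204: authorized but nothing playing.
    # Either way, live mode is viable. Anything else means no access.
    if playback_status in (200, 204):
        return "live"
    return "manual"

print(choose_capture_mode(403))  # development-mode connector -> manual
```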
Transcript
Lastly, I wanted a way to capture clips using episode transcripts. This way, I could search the transcript for the clip and highlight it.
At this point in my vibe coding journey, I learned the best results happen when I break down my vision into steps. Here’s the prompt I used:
I want to add a feature where I:
1. Select an episode
2. PodClip generates the transcript of the episode
3. I can search the transcript
4. I can highlight text in the transcript and save it to Your Clips
The agent warned me that generating episode transcripts would be very resource-intensive, since AI transcription services charge per minute of audio. The agent recommended the following approach:
- New “Episode Transcript” page — after selecting an episode, you’d see a “Generate Full Transcript” button. Clicking it kicks off transcription in the background.
- Chunked transcription — the server processes externally available episode audio, splits it into smaller segments (a few minutes each), and transcribes each chunk. The AI returns text with timestamps, so we know exactly where each sentence falls in the episode.
- Transcript viewer — once ready, the full transcript displays with timestamps in the margin. A search bar at the top lets you filter/highlight matching text.
- Highlight to clip — select any text in the transcript, and a “Save Clip” button appears. Since we know the timestamps for each segment, PodClip automatically sets the start/end time based on what you highlighted.
One important limitation I ran into is that Spotify does not provide access to raw podcast audio through its API. While Spotify streams many of the same podcasts available elsewhere, it delivers audio through its own protected infrastructure and does not allow third-party apps to download or process that audio directly.
Because of this, PodClip does not download or transcribe audio from Spotify. Instead, it relies on publicly available podcast RSS feeds (such as those indexed by Apple Podcasts), where audio files are intentionally distributed for open access. In the RSS model, podcast creators host their audio on external platforms, and the files are meant to be directly downloaded by podcast players.
This approach allows PodClip to support transcription features while respecting platform boundaries and adhering to Spotify’s developer guidelines.
To handle the transcription, I needed to integrate my OpenAI account using the Replit connectors.
TRANSCRIPT PAGE — HOW DATA FLOWS
─────────────────────────────────────────────────────────────
USER SEARCHES FOR EPISODE
│
▼
PodClip Server
│
▼
iTunes Search API
│
▼
full episode audio URL
(public MP3/AAC on podcast CDN)
─────────────────────────────────────────────────────────────
USER CLICKS "GENERATE FULL TRANSCRIPT"
│
▼
PodClip Server
/api/episode-transcripts
│
▼
ffmpeg downloads full episode audio
from podcast CDN (e.g. traffic.libsyn.com)
│
▼
ffmpeg splits audio into 2-minute chunks
│
├──► chunk 1 ──► OpenAI speech-to-text ──► text
├──► chunk 2 ──► OpenAI speech-to-text ──► text
├──► chunk 3 ──► OpenAI speech-to-text ──► text
└──► ...
│
▼
segments stitched together with timestamps
stored in PostgreSQL
│
▼
frontend polls every 3s ──► shows progress bar
until complete ──► displays full transcript


Timestamps of transcript chunk in margin. Image by author.
The app transcribes in two-minute chunks. As a result, the timestamp of the highlighted clip isn’t very precise. However, I care more about the content than the exact timestamp.
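The stitching step in the diagram above can be sketched simply: each chunk’s transcript starts at t=0 from the speech-to-text model’s point of view, so chunk i gets offset by i × 120 seconds to place it in the full episode. This is also exactly why highlight timestamps are only accurate to about two minutes. A Python illustration (the data shape is my assumption):

```python
CHUNK_SECONDS = 120  # the app splits episode audio into 2-minute chunks

def stitch_chunks(chunk_texts: list[str]) -> list[dict]:
    """Stitch per-chunk transcripts into one episode timeline.

    Offsets each chunk by its index times CHUNK_SECONDS so every
    segment knows where it falls in the full episode.
    """
    return [
        {"start_sec": i * CHUNK_SECONDS, "text": text}
        for i, text in enumerate(chunk_texts)
    ]

# In the real pipeline, each text would come from an OpenAI
# speech-to-text call on a chunk produced by something like:
#   ffmpeg -i episode.mp3 -f segment -segment_time 120 -c copy chunk%03d.mp3
segments = stitch_chunks(["Welcome to the show...", "...and that's the idea."])
print(segments[1]["start_sec"])  # 120
```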
End Product & Publishing App
In the end, I had a working app to store all my favorite podcast quotes. Replit makes it easy to publish the app for other users. It handled the login authentication so users can create their own PodClip account. Replit also added a Feedback button.
Here’s a link to the published app: PodClip app. Please don’t be shy about using the Feedback button! I’m very curious to know what could be improved.
Here’s a link to the GitHub repo: PodClip repo

I didn’t keep track of exactly how many hours I spent on this project since I could prompt the agent and then step away while it worked. I estimate I spent about 3 to 5 hours total over the course of a weekend. The most time-consuming parts were prompting the agent and testing out the features for myself.
Future Work
Overall, I consider this app a success. I know I’m going to use it, and it’s much better than my previous system of storing podcast quotes in the Notes app. Nevertheless, there is still room for improvement.
Let’s see how the final product compares to the requirements I listed at the outset:
| App Requirement / Goal | Result | Effort | Notes |
|---|---|---|---|
| 🎧 Connect to Spotify account | ✅ Complete | 🟢 Easy | OAuth authentication worked smoothly with minimal friction. |
| ⏺️ Add a Start/Stop button to capture clips | ⚠️ Workaround Needed | 🔴 Hard | Blocked by access to the Spotify playback API; manual mode used as a stopgap. |
| 📝 Store clip timestamps and transcripts | ✅ Complete | 🟢 Easy | Data storage and retrieval worked reliably. |
| 🔎 Organize clips in a searchable dashboard | ✅ Complete | 🟢 Easy | Dashboard UI and search functionality implemented successfully. |
The biggest remaining task is pulling in current episode info and playback for live mode in the Now Playing feature. While the code for live mode already exists in the app, it still requires testing. When (or if) that happens depends on when Spotify allows access to the playback API.
What I Learned About Vibe Coding
My biggest surprises about vibe coding:
- The agent handled most of the application architecture automatically
- Short and simple prompts were more than adequate
- Platform limitations (Spotify API) were the biggest blocker
The process reminded me of looking at the answer key before wholeheartedly trying to work the problem set. In other words, vibe coding is great for prototyping but doesn’t necessarily build coding skill. And that’s OK. The goal of the project was to rapidly prototype an MVP, which is exactly what it accomplished.
For developers, vibe coding may feel like skipping steps in the learning process. But for experimentation and rapid prototyping, it dramatically lowers the barrier to turning an idea into a working product.
To anyone else who wants to start vibe coding, my advice is to find a problem and dive in. Choose a problem you genuinely care about to keep yourself motivated as you learn how to use the IDE and how best to prompt the agent. Initially, the vibe coding learning curve seemed steep, with plenty of opinions on the various platforms and best practices. I incorrectly thought I needed to know more before beginning. I didn’t. I wish I hadn’t let all that chatter intimidate me. Like most things, vibe coding is easiest to learn by doing.
PodClip won’t turn me into a solopreneur unicorn. However, perhaps one day a PodClip-like feature will be included with Spotify.
Conclusion
Thank you for reading my article. Connect with me on LinkedIn, Twitter, or Instagram.
All feedback is welcome. I am always eager to learn new or better ways of doing things.
Feel free to leave a comment or reach out to me at [email protected].
References