I Finally Built My First AI App (And It Wasn’t What I Expected)



You know how everyone’s talking about AI apps, but no one really shows you what’s happening behind the curtain? Yeah… that was me a few weeks ago — staring at my screen, wondering if I’d ever actually build something that talked back.

So, I decided to just dive in, figure it out, and share everything along the way. By the end of this post, you’ll see exactly what happens when you build your first AI app, and you’ll pick up a few real skills along the way: calling APIs, handling environment variables, and running your first script without breaking anything (hopefully).

Let’s get into it — I promise it’s simpler than it looks.

What Are We Building — and Why It Actually Matters

Okay, so before we start typing code like maniacs, let’s pause for a second and talk about what we’re actually building here. Spoiler: it’s not some sci-fi level AI that will take over your job (yet). It’s something practical, real-world, and totally doable in a single afternoon: an AI-powered article summarizer.

Here’s the idea: you paste a chunk of text — maybe a news article, a research paper, or even a super long blog post — and our little AI app spits out a short, easy-to-read summary. Think of it as your personal TL;DR machine. 

Why this matters:

  • It’s immediately useful: Anyone who reads lots of content (so… basically all of us on TDS) will love having a tool that distills information instantly.
  • It’s simple, but powerful: We’re only making one API call, but the result is a working AI app you can actually show off.
  • It’s expandable: Today, it’s a command-line script. Tomorrow, you could hook it up to Slack, a web interface, or batch-process hundreds of articles.

So yeah, we’re not reinventing the wheel — but we are demystifying what actually happens behind the scenes when you build an AI app. And more importantly, we’re doing it in public, learning as we go, and documenting every little step so that by the time you finish this post, you’ll actually understand what’s happening under the hood.

Next, we’ll get our hands dirty with Python, install the OpenAI package, and set everything up so our AI can start summarizing text. Don’t worry, I’ll explain every single line as we go.

Installing the OpenAI Package (And Making Sure Nothing Breaks)

Alright. This is the part where things usually feel “technical” and slightly intimidating.

But I promise — we’re just installing a package and running a tiny script. That’s it.

First, make sure you have Python installed. If you’re not sure, open your terminal (or Command Prompt on Windows) and run:

python --version

If you see something like Python 3.x.x, you’re good. If not… install Python first and come back.

Now let’s install the OpenAI package. In your terminal:

pip install openai

That command basically tells Python: “Hey, go grab this library from the internet so I can use it in my project.”

If everything goes well, you’ll see a bunch of text scroll by and eventually something like:

Successfully installed openai

That’s your first small win.

Quick Reality Check: What Did We Just Do?

When we ran pip install openai, we didn’t “install AI.” We installed a client library — a helper tool that enables our Python script to communicate with OpenAI’s servers.

Think of it like this:

  • Your computer = the messenger
  • The OpenAI API = the brain in the cloud
  • The openai package = the language translator between them

Without the package, your script wouldn’t know how to properly format a request to the API.

Let’s Test That It Works

Before we move forward, let’s confirm Python can actually see the package.

Run this:

python

Then inside the Python shell:

import openai
print("It works!")

If you don’t see any angry red error messages, congratulations — your environment is ready.

This may seem small, but this step teaches you something important:

  • How to install external libraries
  • How Python environments work
  • How to verify that your setup is correct

These are foundational skills. Every real-world AI or data project starts exactly like this.
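If you’d rather make that check scriptable than eyeball the shell, here’s a tiny standard-library helper that asks Python whether it can locate a package without actually importing it:

```python
import importlib.util

def package_available(name: str) -> bool:
    """Return True if Python can locate the package without importing it."""
    return importlib.util.find_spec(name) is not None

# 'os' ships with Python, so this should always print True;
# swap in "openai" to check your own setup.
print(package_available("os"))
```

Handy to drop at the top of a script so it fails with a clear message instead of a raw ImportError.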

Next, we’ll set up our API key securely using environment variables.

Setting Up Your API Key (Without Accidentally Leaking It)

Okay. This part is important.

To talk to the OpenAI API, we need something called an API key. Think of it as your personal password that says, “Hey, it’s me — I’m allowed to use this service.”

Now here’s the mistake beginners (including past me) make:

They copy the API key and paste it directly into the Python file. Please don’t do that.

If you ever upload that file to GitHub, share it publicly, or even send it to a friend, you’ve basically exposed your secret key to the internet. And yes — people and bots actively scan for that.

So instead, we’re going to store it safely using environment variables.

Step 1: Get Your API Key

  1. Create an account on OpenAI.
  2. Generate an API key from your dashboard.
  3. Copy it somewhere safe (for now).

Don’t worry — we’re not putting it into our code.

Step 2: Set the Environment Variable

On Windows (Command Prompt):

setx OPENAI_API_KEY "your_api_key_here"

On Mac/Linux:

export OPENAI_API_KEY="your_api_key_here"

After running this, close and reopen your terminal so the change takes effect.

What we just did: we created a variable stored in your system that only your machine knows about.

Step 3: Access It in Python

Now let’s confirm Python can see it.

Open Python again:

python

Then type:

import os
api_key = os.getenv("OPENAI_API_KEY")
print(api_key[:4] + "..." if api_key else None)

If you see the first few characters of your key, that means everything worked.

And if None shows up? That just means the environment variable didn’t register — usually fixed by restarting your terminal.

What’s Actually Happening Behind the Scenes?

When we use os.getenv("OPENAI_API_KEY"), Python is simply asking your operating system:

“Hey, do you have a variable saved under this name?”

If it exists, it returns the value. If not, it returns None.

This tiny step introduces a huge real-world concept:

  • Secure configuration management
  • Separating secrets from code
  • Writing production-safe scripts

This is how real applications handle credentials. You’re not just building a toy app anymore. You’re following actual engineering best practices.
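A common pattern that builds on this: fail fast if the variable is missing, instead of letting None sneak deeper into your program and blow up somewhere confusing. A minimal sketch:

```python
import os

def require_env(name: str) -> str:
    """Fetch an environment variable, failing loudly if it's missing."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"Missing environment variable {name!r} - did you restart your terminal?"
        )
    return value
```

Then `api_key = require_env("OPENAI_API_KEY")` either hands you the key or tells you exactly what went wrong, right at startup.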

Next, we’ll finally make our first API call — the moment where your script sends text to the cloud… and something intelligent comes back.

Making Your First API Call (This Is the Magic Moment)

Alright. This is it.

This is the moment where your computer actually talks to the AI.

Up until now, we’ve just been preparing the environment. Installing packages. Setting keys. Doing the “responsible adult” setup work.

Now we finally send a request.

Create a new file called app.py and paste this in:

import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

text_to_summarize = """
Artificial intelligence is transforming industries by automating tasks,
improving decision-making, and enabling new products and services.
However, understanding how these systems work behind the scenes
remains a mystery to many beginners.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that summarizes text clearly and concisely."},
        {"role": "user", "content": f"Summarize this text:\n{text_to_summarize}"}
    ]
)

print(response.choices[0].message.content)

Now go to your terminal and run:

python app.py

And if everything is set up correctly… you should see a clean summary printed in your terminal.

Pause for a second when that happens. Because what just occurred is kind of wild.

Let’s Break Down What Just Happened

Let’s walk through this slowly.

from openai import OpenAI

This imports the client library we installed earlier. It’s the bridge between your script and the API.

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

Here, we create a client object and authenticate using the environment variable we set earlier.

If the key is wrong, the request fails.
If the key is correct, you’re officially connected.

response = client.chat.completions.create(...)

This is the API call.

Your script sends:

  • The model name
  • A list of messages (structured like a conversation)

Then, on the other end:

  • OpenAI’s servers process the request.
  • The model generates a response.
  • The server sends structured JSON back to your script.

Then we extract the actual text with:

response.choices[0].message.content

That’s it.

Just a properly formatted HTTP request going to a cloud server and a structured response coming back.

Why This Is a Big Deal

You just learned how to:

  • Authenticate with an external service
  • Send structured data to an API
  • Receive and parse structured output
  • Execute a full AI-powered workflow in under 30 lines of code

This is the foundation of real AI applications.

Next, we’ll dig into what that response object actually looks like — because understanding the structure is what separates copying code from actually knowing what’s going on.

It Worked… After a Small (Very Real) Reality Check

Before we move on, I need to tell you what happened the first time I ran this.

  • The code was correct.
  • The API key was correct.
  • The request structure was correct.

And then I got this:

openai.RateLimitError: 429
'insufficient_quota'

At first glance, that feels scary.

But here’s what it actually meant:

My script successfully connected to the API. The authentication worked. The server received my request.

I just didn’t have billing enabled. That’s it.

Using the API isn’t the same as using ChatGPT in your browser. The API is infrastructure. It runs on cloud resources. And those resources cost money.

So I added a small amount of credits to my account (nothing crazy — just enough to experiment), ran the exact same script again…

And it worked.

Clean summary printed to the terminal. No code changes.

That moment is important. Because now we can categorize beginner API issues into two main buckets:

  • Code problems → Your Python script is invalid.
  • Infrastructure problems → Authentication, quota, or billing issues.

Once you understand that distinction, AI development becomes way less mysterious.
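One way to internalize that distinction is to triage by HTTP status code. The buckets below are my own illustrative grouping, not an official mapping, but they match the 429 story above:

```python
def classify_api_issue(status_code: int) -> str:
    """Rough triage of HTTP status codes you might see back from an API.
    Illustrative buckets only: is it my code, or my infrastructure/account?"""
    if status_code == 401:
        return "infrastructure: bad or missing API key"
    if status_code == 429:
        return "infrastructure: rate limit or quota/billing"
    if 400 <= status_code < 500:
        return "code: something wrong with the request you built"
    if status_code >= 500:
        return "infrastructure: server-side trouble, retry later"
    return "ok"
```

My 429 was never a code problem — the request was fine; the account just had no quota.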

Now… What Does response Actually Look Like?

When your script works, response isn’t just text. It’s a structured object (basically JSON under the hood).

If you temporarily print the whole thing:

print(response)

You’ll see something structured with fields like:

  • id
  • model
  • usage
  • choices

The actual summary lives inside:

response.choices[0].message.content

Let’s unpack that:

choices → a list of generated outputs
[0] → we’re grabbing the first one
message → the assistant’s reply object
content → the actual text
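Here’s that same navigation on a simplified stand-in for the response, written as a plain dict (the real object uses attribute access, as we saw above; the values here are made up for illustration):

```python
# A simplified, hand-written stand-in for the response structure.
response = {
    "id": "chatcmpl-abc123",
    "model": "gpt-4o-mini",
    "usage": {"prompt_tokens": 52, "completion_tokens": 18, "total_tokens": 70},
    "choices": [
        {"message": {"role": "assistant", "content": "AI is changing how industries work."}}
    ],
}

# Same path as response.choices[0].message.content, just with dict keys.
summary = response["choices"][0]["message"]["content"]
print(summary)
```

Once you can walk this structure with your eyes closed, every other field (like usage) is just another key away.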

This matters more than it seems.

Because in real-world applications, you might:

  • Log token usage for cost tracking
  • Store responses in a database
  • Handle multiple choices
  • Add proper error handling

Right now we’re just printing the content.

But structurally, you now understand how to navigate an API response.

And that’s the difference between copying code… and actually knowing what’s going on.

At this point, you’ve:

  • Installed a production-grade client library
  • Secured credentials properly
  • Sent a structured API request
  • Understood how billing and quota affect infrastructure
  • Parsed structured output

That’s a full AI workflow.

Next, we’ll make this slightly more interactive — instead of hardcoding text, we’ll let the user paste in their own article to summarize.

And that’s when it really starts feeling like a real app.

Making It Interactive (Your TL;DR App, Finally!)

Up until now, we’ve been doing everything with a hardcoded chunk of text. That’s fine for testing, but it’s not very… you know… app-like.

We want to actually let a user paste in any article and get a summary.

Let’s fix that.

Step 1: Get User Input

Python makes this super easy with the input() function. Open your app.py and replace your text_to_summarize variable with this:

text_to_summarize = input("Paste your article here:\n")

That’s it. Now, when you run:

python app.py

The terminal will wait for you to paste something in. (Note: input() reads a single line, so paste your article as one continuous block of text.) You hit Enter, and the AI does its thing.

Step 2: Print the Summary Nicely

Instead of dumping raw text, let’s make it a little prettier:

summary = response.choices[0].message.content
print("\nHere’s your summary:\n")
print(summary)

See what we did there?

We store the output in a variable called summary — handy if we want to use it later.

We add a little heading to make it obvious what the AI returned.

This tiny touch makes your app feel more “finished” without actually being fancy.

Step 3: Test It Out

Run the script, paste in a paragraph from any article, and watch the magic happen:

python app.py

You should see your custom summary pop up in seconds.

This is why we started with a simple hardcoded string — now you can actually interact with the model like a real app user.

Step 4: Optional Extras (If You’re Feeling Fancy)

If you want to take it one step further, you can:

  • Loop until the user quits — let them summarize multiple articles without restarting the script.
  • Save summaries to a file — handy for research or blog prep.
  • Handle empty input — make sure the app doesn’t crash if the user accidentally hits Enter.
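If you want to try the loop idea, here’s one way to structure it. The summarize, read_input, and write parameters are injected (my design choice, not part of the original script) so the loop logic can be exercised without a live API or a real terminal:

```python
def summarize_loop(summarize, read_input=input, write=print):
    """Keep summarizing until the user types 'quit'.
    summarize wraps your API call; read_input/write default to the
    real terminal but can be swapped out for testing."""
    results = []
    while True:
        text = read_input("Paste your article (or type 'quit'): ").strip()
        if text.lower() == "quit":
            break
        if not text:
            write("Nothing to summarize - try again.")
            continue
        summary = summarize(text)
        results.append(summary)
        write("\nHere's your summary:\n" + summary)
    return results
```

In the real app you’d call it as `summarize_loop(my_api_summarize)` and get back every summary from the session, ready to save to a file.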

Polishing the App for Longer Articles

Alright, by now our little AI summarizer works. You paste text, hit Enter, and get a summary. 

But there’s a small problem: what happens if someone pastes a super long article, like a 2,000-word blog post?

If we send that directly to the API, one of two things usually happens:

  • The model might truncate the input and only summarize part of it.
  • The request could fail, depending on token limits.

Not ideal. So let’s make our app smarter.

Step 1: Trim and Clean the Input

Even before worrying about length, we should tidy up the text.

Remove unnecessary whitespace, newlines, or invisible characters:

text_to_summarize = text_to_summarize.strip().replace("\n", " ")

  • strip() removes extra spaces at the start/end
  • replace("\n", " ") turns line breaks into spaces so the model sees a continuous paragraph

Small step, but it makes summaries cleaner.
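If you want a slightly more thorough cleanup, a small regex-based helper can handle tabs, repeated spaces, and the non-breaking spaces that often hitch a ride when text is copied from web pages. This is a sketch of one approach, not the only way:

```python
import re

def clean_text(text: str) -> str:
    """Collapse all whitespace runs (spaces, tabs, newlines, non-breaking
    spaces) into single spaces and trim the ends."""
    text = text.replace("\u00a0", " ")   # non-breaking spaces from web pages
    text = re.sub(r"\s+", " ", text)     # collapse any run of whitespace
    return text.strip()

print(clean_text("  Hello\n\nworld\t! "))  # → Hello world !
```

Drop it in with `text_to_summarize = clean_text(text_to_summarize)` and the chunking step below gets cleaner input for free.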

Step 2: Chunk Long Text

Let’s say we want to split articles into smaller chunks so the model can handle them comfortably. A simple approach is splitting by sentences or paragraphs. Here’s a quick example:

max_chunk_size = 500  # roughly 500 words
chunks = []
words = text_to_summarize.split()
for i in range(0, len(words), max_chunk_size):
    chunk = " ".join(words[i:i+max_chunk_size])
    chunks.append(chunk)

Now chunks is a list of manageable text pieces.

We can then loop through each chunk, summarize it, and combine the summaries at the end.

Step 3: Summarize Each Chunk

Here’s how that might look:

final_summary = ""
for chunk in chunks:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant that summarizes text clearly and concisely."},
            {"role": "user", "content": f"Summarize this text:\n{chunk}"}
        ]
    )
    final_summary += response.choices[0].message.content + " "

Notice how small the change is? But now, even super long articles can be summarized without breaking the app.
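One optional refinement, sketched here with an injected summarize function standing in for the API call above: after summarizing each chunk, run one more pass over the combined summaries so the final output reads as a single piece instead of stitched-together fragments. This "map, then reduce" style is a common pattern, not something the API requires:

```python
def map_reduce_summary(chunks, summarize):
    """Summarize each chunk ('map'), then summarize the combined
    summaries ('reduce') for one coherent final result.
    `summarize` is whatever function wraps your API call."""
    partial_summaries = [summarize(chunk) for chunk in chunks]
    combined = " ".join(partial_summaries)
    return summarize(combined)
```

It costs one extra API call, but for long articles the difference in readability is usually worth it.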

Step 4: Present a Clean Output

Finally, let’s make the result easy to read:

print("\nHere’s your final summary:\n")
print(final_summary.strip())

.strip() at the end ensures no extra spaces or trailing newlines.

The user sees one clean, continuous summary instead of multiple disjointed outputs.

From Idea to Real AI App

When I started this, it was just a simple idea:

“What if I could paste an article and instantly get a clean summary?”

That’s pretty much it. No big startup vision, no complex architecture.

And step by step, here’s what happened:

  • I installed a real production library.
  • I learned how APIs actually work.
  • I handled billing errors and environment variables.
  • I built a working CLI tool.
  • Then I turned it into a web app anyone can use.

Somewhere along the way, this stopped feeling like a “toy script.”
It became a real AI workflow:

Local machine → API call → cloud model → structured response → user interface.

And the best part? I understand every piece of it now.

The errors and warnings also helped. Because building in public forces you to slow down, debug properly, and actually learn what’s happening.

This is how real AI skills are built. Not by memorizing code. But by shipping small things, breaking them, fixing them, and understanding them.

So if this helped you, don’t stop here.

Break it. Improve it.

Add file uploads. Deploy it. Turn it into a Chrome extension. Build the version you wish existed.

And if you do — write about it.

Because the fastest way to grow in AI right now isn’t consuming content.

It’s building in public.

And today, we shipped!

I also deployed the app so you can try it yourself here.

If you enjoyed this article, let me know. I’d love your comments and feedback.

