Transforming Data into Solutions: Building a Smart App with Python and AI | by Vianney Mixtur | Jan, 2025



In this section, I’ll share some implementation details of Baker. Since it is open source, I invite my technical readers to check out the code on GitHub. Less technical readers may want to jump to the next section.

The application is minimalist, with a simple three-tier architecture, and is built almost entirely in Python.

An architecture diagram of the Baker application
Photo by author

It is made of the following components:

  1. Frontend: A Streamlit interface provides an intuitive platform for users to interact with the system, query recipes, and receive recommendations.
  2. Backend: Built with FastAPI, the backend serves as the interface for handling user queries and delivering recommendations.
  3. Engine: The engine contains the core logic for finding and filtering recipes, leveraging monggregate as a query builder.
  4. Database: The recipes are stored in a MongoDB database that processes the aggregation pipelines generated by the engine.

Backend Setup

The backend is initialized in app.py, where FastAPI endpoints are defined. For instance:

from fastapi import FastAPI

from baker.engine.core import find_recipes
from baker.models.ingredient import Ingredient

app = FastAPI()


@app.get("/")
def welcome():
    return {"message": "Welcome to the Baker API!"}


@app.post("/recipes")
def _find_recipes(ingredients: list[Ingredient], serving_size: int = 1) -> list[dict]:
    return find_recipes(ingredients, serving_size)

The /recipes endpoint accepts a list of ingredients and a serving size, then delegates the processing to the engine.
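The Ingredient model itself isn’t shown here. As a rough sketch of its shape (in the real app it is presumably a Pydantic model, so FastAPI can validate the JSON request body), a dependency-free stand-in could look like this; the field names are inferred from the engine code, which reads .name, .unit and .quantity:

```python
from dataclasses import dataclass


# Hypothetical stand-in for baker.models.ingredient.Ingredient;
# the real model is presumably a Pydantic BaseModel so FastAPI can
# validate request bodies against it.
@dataclass
class Ingredient:
    name: str
    unit: str
    quantity: float


# A request body for POST /recipes could then describe a pantry like:
pantry = [
    Ingredient(name="tomato", unit="piece", quantity=4),
    Ingredient(name="carrot", unit="piece", quantity=2),
]
```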

Recipe Engine Logic

The heart of the application resides in core.py within the engine directory. It manages database connections and query pipelines. Below is an example of the find_recipes function:

# Imports and the get_recipes_collection function are not included

def find_recipes(ingredients, serving_size=1):
    # Get the recipes collection
    recipes = get_recipes_collection()

    # Create the pipeline
    pipeline = Pipeline()
    pipeline = include_normalization_steps(pipeline, serving_size)
    query = generate_match_query(ingredients, serving_size)
    pipeline.match(query=query).project(
        include=[
            "id",
            "title",
            "preparation_time",
            "cooking_time",
            "original_serving_size",
            "serving_size",
            "ingredients",
            "steps",
        ],
        exclude="_id",
    )

    # Find the recipes
    result = recipes.aggregate(pipeline.export()).to_list(length=None)

    return result

def generate_match_query(ingredients: list[Ingredient], serving_size: int = 1) -> dict:
    """Generate the match query."""
    operands = []
    for ingredient in ingredients:
        operand = {
            "ingredients.name": ingredient.name,
            "ingredients.unit": ingredient.unit,
            "ingredients.quantity": {"$gte": ingredient.quantity / serving_size},
        }
        operands.append(operand)

    query = {"$and": operands}

    return query

def include_normalization_steps(pipeline: Pipeline, serving_size: int = 1):
    """Add steps to the pipeline that normalize the ingredient quantities in the DB.

    The steps below normalize the quantities of the ingredients in each recipe
    by the recipe's serving size.
    """
    # Unwind the ingredients
    pipeline.unwind(path="$ingredients")

    # Keep the recipe's original serving size before overwriting it
    pipeline.add_fields({"original_serving_size": "$serving_size"})

    # Add the normalized quantity
    pipeline.add_fields(
        {
            "serving_size": serving_size,
            "ingredients.quantity": S.multiply(
                S.field("ingredients.quantity"),
                S.divide(serving_size, S.max([S.field("serving_size"), 1])),
            ),
        }
    )

    # Group the results back into one document per recipe
    pipeline.group(
        by="_id",
        query={
            "id": {"$first": "$id"},
            "title": {"$first": "$title"},
            "original_serving_size": {"$first": "$original_serving_size"},
            "serving_size": {"$first": "$serving_size"},
            "preparation_time": {"$first": "$preparation_time"},
            "cooking_time": {"$first": "$cooking_time"},
            "ingredients": {"$addToSet": "$ingredients"},
            "steps": {"$first": "$steps"},
        },
    )
    return pipeline

The core logic of Baker resides in the find_recipes function.

This function builds a MongoDB aggregation pipeline with monggregate. The pipeline includes several steps.

The first steps are generated by the include_normalization_steps function, which dynamically rescales the ingredient quantities stored in the database to the user’s desired serving size, ensuring we are comparing apples to apples.
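In plain Python, the scaling performed by those pipeline stages amounts to the following (a sketch of the arithmetic only; the actual computation runs inside MongoDB via the $multiply, $divide and $max operators shown above, and the helper name is mine):

```python
def normalized_quantity(qty: float, recipe_serving_size: int, requested_serving_size: int) -> float:
    # Scale an ingredient quantity from the recipe's own serving size
    # to the requested one. max(..., 1) guards against a zero or missing
    # serving size, as in the pipeline.
    return qty * (requested_serving_size / max(recipe_serving_size, 1))


# A recipe written for 4 servings that uses 500 g of pasta, requested for 2:
normalized_quantity(500, 4, 2)  # -> 250.0
```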

Then the actual matching logic is created by the generate_match_query function. Here we ensure that, for the ingredients concerned, the recipes don’t require more than what the user has.
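For example, with 4 tomatoes and 2 carrots for 2 servings (as in the screenshots below), the function produces a query like the hand-written dict here; the "piece" unit is an assumption for illustration:

```python
serving_size = 2

# One operand per pantry ingredient: the $gte threshold is the user's
# quantity divided by the requested serving size, compared against the
# normalized quantities in the database.
query = {
    "$and": [
        {
            "ingredients.name": "tomato",
            "ingredients.unit": "piece",
            "ingredients.quantity": {"$gte": 4 / serving_size},
        },
        {
            "ingredients.name": "carrot",
            "ingredients.unit": "piece",
            "ingredients.quantity": {"$gte": 2 / serving_size},
        },
    ]
}
# i.e. thresholds of 2.0 for tomatoes and 1.0 for carrots
```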

Finally, a projection filters out the fields that we don’t need to return.

Baker helps you discover a better fate for your ingredients by finding recipes that match what you already have at home.

The app features a simple form-based interface. Enter the ingredients you have, specify their quantities, and select the unit of measurement from the available options.

A screenshot showing the Baker interface to enter ingredients
Photo by author

In the example above, I’m searching for a recipe for two servings to use up 4 tomatoes and 2 carrots that have been sitting in my kitchen for a bit too long.

Baker found two recipes! Clicking on a recipe lets you view the full details.

A screenshot of the Baker interface showing how recipes are displayed
Photo by author

Baker adapts the quantities in the recipe to match the serving size you’ve set. For example, if you adjust the serving size from two to four people, the app recalculates the ingredient quantities accordingly.
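That recalculation is straightforward proportional scaling; a minimal sketch (the rescale helper is mine, not part of Baker):

```python
def rescale(quantities: dict[str, float], old_serving: int, new_serving: int) -> dict[str, float]:
    # Scale every displayed ingredient quantity by the ratio of the
    # new serving size to the old one.
    factor = new_serving / old_serving
    return {name: qty * factor for name, qty in quantities.items()}


# Going from two to four people doubles each quantity:
rescale({"tomato": 4, "carrot": 2}, old_serving=2, new_serving=4)
# -> {'tomato': 8.0, 'carrot': 4.0}
```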

Updating the serving size may also change the recipes that appear. Baker ensures that the suggested recipes match not only the serving size but also the ingredients and quantities you have on hand. For instance, if you only have 4 tomatoes and 2 carrots for two people, Baker will avoid recommending recipes that require 4 tomatoes and 4 carrots.
