The Definitive Guide to Structured Data Parsing with OpenAI GPT3.5 | by Marie Stephen Leo | Apr, 2024



Systematically comparing Instructor, Fructose, and Langchain for three complex real-world structured data parsing tasks.

Image generated by Author using ChatGPT

Parsing structured data from Large Language Models (LLMs) can be frustrating for anything beyond toy problems. Yet, reliably parsing LLM outputs into pre-defined structures is crucial to integrating LLMs into other software systems and generative AI apps. OpenAI has taken the lead by releasing GPT function calling (Link) and JSON mode (Link). Still, these require intensive prompt engineering plus robust parsing, retries, and graceful error handling to work reliably on real-world production problems.
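To make the failure modes concrete, here is a minimal sketch of the kind of defensive parsing layer JSON mode pushes onto you. The `Invoice` schema and `parse_llm_json` helper are hypothetical examples, not from the article; in a real app the `raw` string would come from an OpenAI chat completion.

```python
import json

from pydantic import BaseModel, ValidationError


class Invoice(BaseModel):
    """Example target schema for a structured-extraction task."""
    vendor: str
    total: float


def parse_llm_json(raw: str) -> Invoice:
    """Parse a raw LLM reply into an Invoice, handling common failure modes."""
    # LLMs frequently wrap JSON-mode output in markdown fences; strip them first.
    cleaned = raw.strip().removeprefix("```json").removeprefix("```").removesuffix("```").strip()
    try:
        return Invoice.model_validate(json.loads(cleaned))
    except (json.JSONDecodeError, ValidationError) as err:
        # In production you would retry the LLM call, feeding the error
        # message back into the prompt instead of raising immediately.
        raise ValueError(f"LLM output failed validation: {err}") from err
```

Libraries like Instructor essentially automate this validate-and-retry loop for you, which is what the comparison below evaluates.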

Below are some problems I’ve faced when parsing structured data with LLMs. This article was written entirely by a human with help from Grammarly’s grammar checker, which has been my writing method since 2019.

  1. Classification: The LLM must strictly adhere to a list of allowed classes, which can number in the tens to hundreds in real-world problems. LLMs start hallucinating disallowed classes in tasks with more than a handful of classes.
  2. Named Entity Recognition (NER): The LLM should only pick entities explicitly present in the text. These entities might be in a 2…