
From Text to Tables: Feature Engineering with LLMs for Tabular Data

By Admin
March 22, 2026


In this article, you'll learn how to use a pre-trained large language model to extract structured features from text and combine them with numeric columns to train a supervised classifier.

Topics we'll cover include:

  • Creating a toy dataset with mixed text and numeric fields for classification
  • Using a Groq-hosted LLaMA model to extract JSON features from ticket text with a Pydantic schema
  • Training and evaluating a scikit-learn classifier on the engineered tabular dataset

Let's not waste any more time.

Image by Editor

Introduction

While large language models (LLMs) are typically used for conversational purposes in use cases that revolve around natural language interactions, they can also assist with tasks like feature engineering on complex datasets. Specifically, you can leverage pre-trained LLMs from providers like Groq (for example, models from the Llama family) to undertake data transformation and preprocessing tasks, including turning unstructured data like text into fully structured, tabular data that can be used to fuel predictive machine learning models.

In this article, I'll guide you through the full process of applying feature engineering to raw text, turning it into tabular data suitable for a machine learning model: specifically, a classifier trained on features created from text by using an LLM.

Setup and Imports

First, we'll make all the necessary imports for this practical example:

import pandas as pd
import json

from pydantic import BaseModel, Field
from openai import OpenAI
from google.colab import userdata

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.preprocessing import StandardScaler

Note that besides common libraries for machine learning and data preprocessing like scikit-learn, we import the OpenAI class — not because we'll directly use an OpenAI model, but because many LLM APIs (including Groq's) have adopted the same interface style and specifications as OpenAI. This class therefore lets you interact with a variety of providers and access a wide range of LLMs through a single client, including Llama models via Groq, as we'll see shortly.

Next, we set up a Groq client to enable access to a pre-trained LLM that we can call via API for inference during execution:

groq_api_key = userdata.get('GROQ_API_KEY')

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=groq_api_key
)

Important note: for the above code to work, you must define an API secret key for Groq. In Google Colab, you can do this through the "Secrets" icon on the left-hand sidebar (this icon looks like a key). Here, give your key the name 'GROQ_API_KEY', then register on the Groq website to get an actual key, and paste it into the value field.
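If you are running outside Colab, a common alternative is to read the key from an environment variable instead of Colab secrets. A minimal sketch, assuming the conventional variable name GROQ_API_KEY (the helper function below is mine, not part of the original tutorial):

```python
import os
from typing import Optional

def load_groq_key(env_var: str = "GROQ_API_KEY") -> Optional[str]:
    """Return the Groq API key from the environment, or None if unset."""
    return os.environ.get(env_var)

key = load_groq_key()
if key is None:
    print("GROQ_API_KEY is not set; the Groq client below will fail without it.")
```

You would then pass key (instead of the Colab userdata lookup) as api_key when constructing the client.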

Creating a Toy Ticket Dataset

The next step generates a synthetic, partly random toy dataset for illustrative purposes. If you have your own text dataset, feel free to adapt the code accordingly and use your own.


import random
import time

random.seed(42)
categories = ["access", "inquiry", "software", "billing", "hardware"]

templates = {
    "access": [
        "I've been locked out of my account for {days} days and need urgent help!",
        "I can't log in, it keeps saying bad password.",
        "Reset my access credentials immediately.",
        "My 2FA isn't working, please help me get into my account."
    ],
    "inquiry": [
        "When will my new credit card arrive in the mail?",
        "Just checking on the status of my recent order.",
        "What are your business hours on weekends?",
        "Can I upgrade my current plan to the premium tier?"
    ],
    "software": [
        "The app keeps crashing every time I try to view my transaction history.",
        "Software bug: the submit button is greyed out.",
        "Pages are loading incredibly slowly since the last update.",
        "I'm getting a 500 Internal Server Error on the dashboard."
    ],
    "billing": [
        "I need a refund for the extra charges on my bill.",
        "Why was I billed twice this month?",
        "Please update my payment method, the old card expired.",
        "I didn't authorize this $49.99 transaction."
    ],
    "hardware": [
        "My hardware token is broken, I can't log in.",
        "The screen on my physical device is cracked.",
        "The card reader isn't scanning properly anymore.",
        "Battery drains in 10 minutes, I need a replacement unit."
    ]
}

data = []
for _ in range(100):
    cat = random.choice(categories)
    # Inject a random number of days into specific templates to add variety
    text = random.choice(templates[cat]).format(days=random.randint(1, 14))

    data.append({
        "text": text,
        "account_age_days": random.randint(1, 2000),
        "prior_tickets": random.choices([0, 1, 2, 3, 4, 5], weights=[40, 30, 15, 10, 3, 2])[0],
        "label": cat
    })

df = pd.DataFrame(data)

The generated dataset contains customer support tickets, combining text descriptions with structured numeric features like account age and number of prior tickets, as well as a class label spanning several ticket categories. These labels will later be used for training and evaluating a classification model at the end of the process.
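Since categories are drawn uniformly at random, the class balance of the toy dataset should be roughly even but not exact. You can sanity-check this with a quick tally; the sketch below draws a fresh stand-in list of labels, but Counter(df["label"]) works the same way on the real DataFrame:

```python
import random
from collections import Counter

random.seed(42)
categories = ["access", "inquiry", "software", "billing", "hardware"]

# Draw 100 labels the same way the dataset loop does, then tally them
labels = [random.choice(categories) for _ in range(100)]
counts = Counter(labels)
print(counts.most_common())
```

A skewed tally here would warn you early that per-class metrics on a 100-row dataset will be noisy.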

Extracting LLM Features

Next, we define the desired tabular features we want to extract from the text. The choice of features is domain-dependent and fully customizable, but you'll use the LLM later on to extract these fields in a consistent, structured format:

class TicketFeatures(BaseModel):
    urgency_score: int = Field(description="Urgency of the ticket on a scale of 1 to 5")
    is_frustrated: int = Field(description="1 if the user expresses frustration, 0 otherwise")

For instance, urgency and frustration often correlate with specific ticket types (e.g. access lockouts and outages tend to be more urgent and emotionally charged than general inquiries), so these signals can help a downstream classifier separate categories more effectively than raw text alone.
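Even with a schema in the prompt, LLM replies can occasionally drift outside the requested ranges, so it is prudent to clamp or validate the parsed JSON before using it. A minimal stdlib sketch; the fallback defaults and clamping rules below are my own assumption, not part of the original pipeline:

```python
import json

def sanitize_features(raw_json: str) -> dict:
    """Parse the LLM's JSON reply and coerce both fields into valid ranges."""
    try:
        obj = json.loads(raw_json)
    except json.JSONDecodeError:
        obj = {}
    # Clamp urgency to 1..5, defaulting to 3 (neutral) when missing or malformed
    try:
        urgency = int(obj.get("urgency_score", 3))
    except (TypeError, ValueError):
        urgency = 3
    urgency = max(1, min(5, urgency))
    # Coerce the frustration flag to a strict 0/1
    frustrated = 1 if obj.get("is_frustrated") in (1, "1", True) else 0
    return {"urgency_score": urgency, "is_frustrated": frustrated}
```

Pydantic validators can achieve the same effect; this plain-Python version just makes the failure handling explicit.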

The next function is a key element of the process, as it encapsulates the LLM integration needed to transform a ticket's text into a JSON object that matches our schema.


def extract_features(text: str) -> dict:
    # Sleep for 2.5 seconds for safer use under the constraints of the 30 RPM free-tier limit
    time.sleep(2.5)

    schema_instructions = json.dumps(TicketFeatures.model_json_schema())
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[
            {
                "role": "system",
                "content": f"You are an extraction assistant. Output ONLY valid JSON matching this schema: {schema_instructions}"
            },
            {"role": "user", "content": text}
        ],
        response_format={"type": "json_object"},
        temperature=0.0
    )
    return json.loads(response.choices[0].message.content)

Why does the function return JSON objects? First, JSON is a reliable way to ask an LLM to produce structured outputs. Second, JSON objects can be easily converted into Pandas Series objects, which can then be seamlessly merged with the columns of an existing DataFrame as new ones. The following instructions do the trick and append the new features, stored in engineered_features, to the rest of the original dataset:

print("1. Extracting structured features from text using LLM...")
engineered_features = df["text"].apply(extract_features)
features_df = pd.DataFrame(engineered_features.tolist())

X_raw = pd.concat([df.drop(columns=["text", "label"]), features_df], axis=1)
y = df["label"]

print("\n2. Final Engineered Tabular Dataset:")
print(X_raw)

Here's what the resulting tabular data looks like:

    account_age_days  prior_tickets  urgency_score  is_frustrated
0                564              0              5              1
1               1517              3              4              0
2                 62              0              5              1
3                408              2              4              0
4                920              1              5              1
..               ...            ...            ...            ...
95                91              2              4              1
96               884              0              4              1
97              1737              0              5              1
98               837              0              5              1
99               862              1              4              1

[100 rows x 4 columns]
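The JSON-to-columns step can also be exercised in isolation, which is handy for testing the pipeline without spending API calls. A small sketch with hard-coded dicts standing in for the LLM's replies (the demo names and values are invented for illustration):

```python
import pandas as pd

# Stand-in for the ticket DataFrame: two rows with text, a numeric column, and a label
df_demo = pd.DataFrame({
    "text": ["I can't log in!", "When does my card arrive?"],
    "account_age_days": [120, 900],
    "label": ["access", "inquiry"],
})

# Stand-in for the per-row outputs that extract_features would return
extracted = [
    {"urgency_score": 5, "is_frustrated": 1},
    {"urgency_score": 2, "is_frustrated": 0},
]

# Same merge pattern as the real pipeline: drop text/label, concatenate new columns
features_demo = pd.DataFrame(extracted)
X_demo = pd.concat([df_demo.drop(columns=["text", "label"]), features_demo], axis=1)
print(X_demo)
```

Swapping the hard-coded list for real extract_features outputs reproduces the full pipeline.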

Practical note on cost and latency: Calling an LLM once per row can become slow and expensive on larger datasets. In production, you'll usually want to (1) batch requests (process many tickets per call, if your provider and prompt design allow it), (2) cache results keyed by a stable identifier (or a hash of the ticket text) so re-runs don't re-bill the same examples, and (3) implement retries with backoff to handle transient rate limits and network errors. These three practices typically make the pipeline faster, cheaper, and far more reliable.
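The caching and retry ideas can be sketched in a few lines of stdlib Python. Here, call_llm is a placeholder for the real API call (e.g. extract_features), the in-memory dict stands in for a persistent cache, and the backoff schedule is an assumption you should tune to your provider's rate limits:

```python
import hashlib
import time

_cache: dict = {}

def cached_with_retry(text: str, call_llm, max_retries: int = 3, base_delay: float = 1.0) -> dict:
    """Return cached features for `text`, retrying the LLM call with
    exponential backoff on transient failures."""
    # Key the cache on a hash of the ticket text so re-runs skip billed calls
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if key in _cache:
        return _cache[key]
    for attempt in range(max_retries):
        try:
            result = call_llm(text)
            _cache[key] = result
            return result
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Back off exponentially: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** attempt))
```

In a real pipeline you would persist _cache to disk (e.g. a JSON or SQLite file) and narrow the except clause to the rate-limit and network errors your client library raises.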

Training and Evaluating the Model

Finally, here comes the machine learning pipeline, where the updated, fully tabular dataset is scaled, split into training and test subsets, and used to train and evaluate a random forest classifier.

print("\n3. Scaling and Training Random Forest...")
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_raw)

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.4, random_state=42)

# Train a random forest classification model
clf = RandomForestClassifier(random_state=42)
clf.fit(X_train, y_train)

# Predict and evaluate
y_pred = clf.predict(X_test)
print("\n4. Classification Report:")
print(classification_report(y_test, y_pred, zero_division=0))

Here are the classifier results:

Classification Report:
              precision    recall  f1-score   support

      access       0.22      0.18      0.20        11
     billing       0.29      0.33      0.31         6
    hardware       0.29      0.25      0.27         8
     inquiry       1.00      1.00      1.00         8
    software       0.44      0.57      0.50         7

    accuracy                           0.45        40
   macro avg       0.45      0.47      0.45        40
weighted avg       0.44      0.45      0.44        40

If you used the code for generating a synthetic toy dataset, you may get a rather disappointing classifier result in terms of accuracy, precision, recall, and so on. This is normal: for the sake of efficiency and simplicity, we used a small, partly random set of 100 instances, which is usually too small (and arguably too random) to perform well. The key here is the process of turning raw text into meaningful features through the use of a pre-trained LLM via API, which should work reliably.

Summary

This article takes a gentle tour through the process of turning raw text into fully tabular features for downstream machine learning modeling. The key trick shown along the way is using a pre-trained LLM to perform inference and return structured outputs via effective prompting.
