
The Hustle x Trends

AI for Business Builders

A 4-step crash course on using AI to supercharge your venture

Unleash your business’s potential with Breeze — HubSpot’s AI tools, seamlessly integrated across the CRM.

C’mon. All the cool kids are doing it.

As a business builder, you must be curious.

How can you leverage this fantastic beast to grow your business? How much should you invest in AI, if at all?

And most importantly...where do you start?

If you're not technical, it's easy to get lost in the nitty gritty. That's why we're here — to spell out what it all means, and how you can level up in this game.

What You'll Learn

The Deal with LLMs


To understand LLMs, just think about input and output.

As an entrepreneur, you need to know how to communicate with LLMs — or modify your input — for the optimal output that will be useful for your business.

We are living through one of the greatest paradigm shifts in human history.

We now have the compute power, the architecture, and algorithms to have meaningful natural language conversations and glean unique perspectives with our data.

Prompting is programming. Conversation is the new interface.

 

The Opportunity

In 2023, large language model developers raised approximately $12 billion in equity funding across 10 deals — 12x the previous year's total. That's some serious cash influx.

Notable LLM developers' funding:

  • OpenAI: $10B
  • Anthropic: $450M
  • Cohere: $270M
  • Mistral AI: $114M

We don't need to tell you that there's tremendous opportunity in leveraging LLMs to grow your business and optimize your operations. Companies around the world are already seeing a 3.5x return on average from their AI investments.


The question is...

Where should you start? 

We tapped two experts to spell it out:


There are 4 levels of working with LLMs.

Each level builds a foundation for the next, in the sense that it provides you with the knowledge and skills you need to use the next one. For levels 2-4, you will need a developer (or a team of them) to help you.


Level 1. Prompt Engineering

Think of prompt engineering as talking directly to a bot using really well-crafted messages. The more specific instructions you provide, the better results the LLM will put out.

When to use it

Use prompt engineering if you need help with:

  • Manual tasks that don’t require automation
  • General tasks that don’t require highly domain-specific knowledge

Why it matters

Prompt engineering was all the rage (and everyone’s dream job) for a hot minute. Now, it might not be the replace-all profession of the future, but there’s plenty to gain by mastering this skill.

Crafting a good prompt can take a few minutes to several hours, but the overall savings in time and gain in productivity still yield a fruitful ROI.

It's useful in different areas: 

Marketing

  • Idea generation
  • Copywriting
  • Content creation
  • Proofreading
  • Product descriptions

Customer Experience

  • Responding to reviews
  • Feedback and surveys
  • Onboarding

Human Resources

  • Scanning resumes
  • Writing job descriptions
  • Performance reviews

Operations

  • Scheduling appointments
  • Refining business plans
  • Data analysis

Expert Tip

Simply prompt engineering public-facing LLMs (like ChatGPT or Claude) can go a long way for your business.

Done right, it might be the only step you ever need.

 

How It Works

Prompt engineering is not as easy as asking ChatGPT a bunch of questions.

You need to design and construct prompts to give clear, concise, and effective directions to the LLM.

The prompt can be a question, a statement, or a piece of code. It can include keywords or examples.

Basically, your goal is to provide the LLM with enough information to understand what you want it to do, while avoiding ambiguity or misleading the model.

Like teaching humans, there are tried-and-true techniques to help LLMs learn better and faster.

Let's look at some examples below.

Crash Course

Prompting Techniques

Below, we break down each technique and how to apply it in real business cases.

Few-Shot Prompting

With few-shot prompting, you give the LLM a few "shots", or examples, so it can generate answers in a style that aligns with those examples.

It's an upgrade from zero-shot prompting, where your instruction to the LLM has no information about the desired output (you ask it to perform a task without providing any example solutions).

Say you want to create meeting summaries in a fixed format, with specific subheaders. You can provide a few sample summaries in your prompt, so the LLM can follow the same format as it summarizes more meetings.

This technique is helpful for tasks with a limited amount of training data, or for hard-to-define tasks.
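As a rough sketch of the meeting-summary example above, here's how a few-shot prompt could be assembled (the transcripts, summary format, and subheaders are made-up placeholders):

```python
# Minimal sketch: assembling a few-shot prompt for meeting summaries.
# The transcripts, summary format, and subheaders are made-up placeholders.
EXAMPLES = [
    (
        "Weekly sync. Discussed Q3 roadmap; agreed to hire two engineers.",
        "## Decisions\nFinalize Q3 roadmap\n## Action Items\nPost two job openings",
    ),
]

def few_shot_prompt(new_transcript: str) -> str:
    parts = ["Summarize meetings using the exact format shown in the examples."]
    for transcript, summary in EXAMPLES:  # the "shots"
        parts.append(f"Meeting notes:\n{transcript}\nSummary:\n{summary}")
    parts.append(f"Meeting notes:\n{new_transcript}\nSummary:")  # the real task
    return "\n\n".join(parts)

prompt = few_shot_prompt("Kickoff call. Agreed on launch date and owners.")
```

The prompt ends right where the model should continue, so its completion naturally lands in the slot after the final "Summary:".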

Chain-of-Thought Prompting

Chain-of-thought is a prompting technique that helps LLMs reason better by thinking logically through complex, multi-step problems.

When prompted, the LLM-generated chain of thought mimics the thought process we humans go through to solve a multi-step problem, but much faster.

Store away the whiteboard — you can now ask AI to think step-by-step for you.
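A minimal sketch of the idea: appending a "think step by step" instruction to an ordinary question (the question and wording are illustrative, not a prescribed formula):

```python
# Minimal sketch: zero-shot chain-of-thought prompting. Appending a
# "think step by step" instruction nudges the model to show its reasoning.
def chain_of_thought(question: str) -> str:
    return (
        f"{question}\n"
        "Let's think step by step, showing each intermediate calculation, "
        "then state the final answer on its own line."
    )

prompt = chain_of_thought(
    "If we sell 40 units a day at $25 each, what's our revenue over 30 days?"
)
```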

Self-Consistency

Ask the model to generate multiple responses to the same prompt, and compare them for consistency. By requiring consistency across answers, you can reduce potential bias or noise in the outputs.

This technique is designed to improve the performance of LLMs on problems that require multiple reasoning paths.

For example, you can ask the LLM to generate multiple replies to the same customer query. Comparing the responses helps ensure consistent and accurate information is provided to your customers.

This technique helps the LLM identify the most consistent answer, even when there are multiple possible answers. It improves the overall quality of the LLM's responses by ensuring coherence and accuracy in the generated content.
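Here's a toy sketch of the voting logic, with a stubbed-in function standing in for the real (stochastic) LLM call:

```python
from collections import Counter

# Toy sketch of self-consistency: sample several answers to the same prompt
# and keep the most frequent one. fake_llm stands in for a real, stochastic
# LLM call (e.g. one made at a high temperature).
def fake_llm(prompt: str, seed: int) -> str:
    samples = ["$30,000", "$30,000", "$3,000", "$30,000", "$30,000"]
    return samples[seed % len(samples)]

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    answers = [fake_llm(prompt, i) for i in range(n_samples)]
    best, _count = Counter(answers).most_common(1)[0]  # majority vote
    return best

print(self_consistent_answer("What is our monthly revenue?"))  # prints $30,000
```

The one-off "$3,000" sample gets outvoted, which is exactly the noise-reduction effect described above.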

Iterative Prompting

Iteratively test and refine prompts based on the model's responses. It's basically a back-and-forth with AI: you start with a prompt, see what it gives you, then tweak your ask based on the response. Rinse and repeat until you get what you're looking for. It's like shaping the answer one step at a time.
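A toy sketch of that loop, with a stub standing in for the LLM and a word-count check standing in for your real quality bar:

```python
# Toy sketch of iterative prompting: test a prompt, check the output against
# a simple acceptance rule, and tighten the ask until it passes. fake_llm
# stands in for a real LLM call.
def fake_llm(prompt: str) -> str:
    # Pretend the model only respects brevity when told explicitly.
    if "under 5 words" in prompt:
        return "Fresh roasts, friendly faces"
    return "A very long and rambling tagline that keeps going"

def refine(prompt: str, max_rounds: int = 3) -> str:
    output = fake_llm(prompt)
    for _ in range(max_rounds):
        if len(output.split()) <= 5:         # our acceptance check
            return output
        prompt += " Keep it under 5 words."  # tweak the ask and retry
        output = fake_llm(prompt)
    return output

tagline = refine("Write a tagline for a coffee shop.")
```

In practice you are the acceptance check, eyeballing each response and deciding what to tweak next.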

There are many other prompt engineering techniques, and new ones emerge every day.

General knowledge prompting, reverse prompting, ReAct prompting...to name a few.

You can easily find resources online if you want to go down the rabbit hole. There are also marketplaces where you can buy ready-made prompts for cheap.

If you own a small business, or are just starting out, it's likely that prompt engineering will get you the results you want.

It's a free — or low-cost — way of pumping up your biz with AI. The experience you gain will also help you level up down the line, if your business growth requires it.

 

Level 2. Prompt Engineering with API

Prompt engineering with an API is still like talking to a bot with well-crafted messages, but instead of talking directly, you do it through an API, which acts as a translator between your software and the model.

Based on your specific needs, the right API will increase the precision of the LLM’s output.

When to use it

Move on to this level if:
  • You’re not getting satisfactory results by prompt engineering in public-facing LLMs
  • Your data set gets too large
  • You’re working with highly domain-specific knowledge (think medical, legal, financial...)
  • You have a very specific process flow

Why it matters

Prompt engineering with an API lets you go beyond the chat window: you can set a system role that governs every response, tune hyperparameters to shape the model's behavior, and connect the LLM to your own data through a vector database.

How It All Works

Let's look at what these technical-sounding concepts mean, and how you can adapt them in your business.

If you want to set the tone, style and persona of all responses generated by the LLM, you can set a System Role with the OpenAI API (a good choice for general-purpose prompt engineering tasks).

Think about the "apply to all" filter in your calendar or email. This is how the system role works — it sets an overarching rule that applies to every subsequent action by the LLM.

Doing so will give you powerful control over inference, guiding the LLM's behavior for more targeted and effective responses.
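As a sketch, here's what a system role looks like in the OpenAI chat-message format (the bakery persona is a made-up example; actually sending this payload requires the openai client and an API key, so only the messages are shown):

```python
# Sketch of a System Role in the OpenAI chat-message format. The system
# message acts like an "apply to all" rule for the whole conversation.
# The bakery persona is a made-up example.
messages = [
    {
        "role": "system",
        "content": (
            "You are a friendly support agent for a small bakery. "
            "Answer in two sentences or fewer and never discuss competitors."
        ),
    },
    {"role": "user", "content": "Do you have gluten-free options?"},
]
```

Every user message that follows in the conversation is answered under the same system instruction, without you restating it.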

Hyperparameters

Imagine a control panel full of dials and buttons. That's what hyperparameters give you: the ability to adjust the settings of an LLM to control the quality of its output.

Want it to be highly creative, or nice and moderate? Make it a chatterbox, or mysteriously monosyllabic? You can do it all by playing with the hyperparameters.
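For illustration, here are some common dials as they appear in an OpenAI-style chat request; exact parameter names and valid ranges vary by provider:

```python
# Illustration: common sampling "dials" as they appear in an OpenAI-style
# chat request. Exact parameter names and ranges vary by provider.
creative_settings = {
    "temperature": 1.2,        # higher = more varied, adventurous wording
    "top_p": 0.95,             # sample from the top 95% of probability mass
    "max_tokens": 400,         # chatterbox: allow long replies
    "frequency_penalty": 0.5,  # discourage repeating the same phrases
}

terse_settings = {
    "temperature": 0.2,        # low = predictable, on-script answers
    "max_tokens": 40,          # monosyllabic: cut replies short
}
```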

Vector Databases

A vector database is like an Amazon inventory system. Amazon needs to know what's in stock, where it is, and how many. The inventory system keeps track of all this, so employees can quickly find what customers want and restock when necessary.

A vector database is like that, but for words, sentences, and meanings. It keeps track of how words and ideas are related to each other in a mathematical way.

Just like Amazon uses its inventory to quickly find products and keep customers happy, an LLM uses a vector database to quickly find and understand the relationships between words and ideas, helping it communicate and understand language just like a human.

How it works

Let's bring back the input-output graphic.

When you submit an input, the API turns it into a vector, then searches the vector database for the closest match.

Once it finds one, it will prepend the match to your input and pass the whole thing to the LLM. At that point, the LLM will be able to create an answer based on that data.
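Here's a toy sketch of that retrieve-then-prepend flow. Real systems use an embedding model and a vector database; hand-made three-number "embeddings" stand in for both here:

```python
from math import sqrt

# Toy sketch of the retrieve-then-prepend flow. Real systems use an
# embedding model and a vector database; here, hand-made three-number
# "embeddings" stand in for both.
DOCS = {
    "Refunds are processed within 5 business days.": [0.9, 0.1, 0.0],
    "We ship worldwide except to PO boxes.": [0.1, 0.9, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

def build_prompt(query_text: str, query_vec: list) -> str:
    # Find the stored document whose vector is closest to the query's.
    best_doc = max(DOCS, key=lambda doc: cosine(DOCS[doc], query_vec))
    # Prepend the match, then hand the whole thing to the LLM.
    return f"Context: {best_doc}\n\nQuestion: {query_text}"

prompt = build_prompt("How long do refunds take?", [0.8, 0.2, 0.0])
```

The refund question's vector sits closest to the refund document, so that document (and not the shipping one) is what gets prepended.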

AI Agents

AI agents provide applications with a fundamentally new set of capabilities, including the ability to solve complex problems and interact with the external world.

An AI agent typically has access to a suite of tools and determines which ones to use depending on the user input. These agents can employ multiple tools simultaneously or sequentially, using the output of one tool as the input for the next.

This process of creating sequences of outputs and inputs is known as "chaining", and it is a potent technique for problem-solving in complex scenarios.
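A toy sketch of chaining, where each stand-in "tool" feeds its output to the next (real tools would be parsers, database lookups, draft generators, and so on):

```python
# Toy sketch of "chaining": each tool's output becomes the next tool's
# input. The three tools here are trivial stand-ins for real ones.
def extract_order_id(ticket: str) -> str:
    return ticket.split("#")[1].split()[0]

def look_up_status(order_id: str) -> str:
    fake_db = {"1042": "shipped"}  # stand-in for a real order database
    return fake_db.get(order_id, "unknown")

def draft_reply(status: str) -> str:
    return f"Good news: your order has {status}!"

def run_chain(ticket: str) -> str:
    result = ticket
    for tool in (extract_order_id, look_up_status, draft_reply):
        result = tool(result)  # pipe each output into the next tool
    return result

print(run_chain("Where is order #1042 ?"))
```

In a real agent, the LLM itself decides which tool to call at each step rather than following a fixed sequence like this one.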

Level 3. Fine-Tuning

Fine-tuning is a technique where a pre-trained model is adjusted to perform a new task. You're teaching the LLM how to use the knowledge that it already has to do something new.

It's basically prompt engineering with API, on steroids.

When to use it

Move on to this level if: 
  • You have large quantities of domain-specific data, and prompt engineering with API didn't yield satisfactory results
  • You have the budget — when your training data gets large enough to demand fine-tuning, the cost can add up quickly (we’ll get to that later)

Why it matters

Working with a fine-tuned model is like having a super expert with decades of experience in your industry.

All the knowledge is built inside their minds - they don’t need to do research or find stuff in the inventory system (vector database).

You can fine-tune an LLM to do very specific tasks in your industry niche.

Examples
  • Email spam detection
  • Customer support chatbots
  • Document summarization
  • Social media monitoring
  • Fake news detection
  • Resume screening
  • Legal contract analysis
  • Content moderation
  • Stock market analysis
  • E-commerce product descriptions
  • Language translation
  • Virtual assistants
  • Personalized education
  • Artificial creativity

How It Works

Cost

In most business use cases, fine-tuning an LLM would cost anywhere from $50k-$300k. The fine-tuning run itself could be ~$1k, but there are many other associated costs.

Cleaning and annotating your data can run in the low thousands to $100k+. The same price range applies to integrating the model into your existing ecosystem.

If you want real time insights without compromising quality, finding the best model fit and subsequent optimizations also add to the price.

Don’t forget that you need to employ machine learning engineers to test, evaluate, and repeat the process until it works properly. For complex cases, it could take them six months to a year. These engineers are typically paid over $250k annually -- you do the math.

You can save some money by using open source models. Hugging Face is an AI community with open-source tools, resources, and pre-trained models to help you fine-tune a model for your specific task.

Fine-Tuning Examples

You can fine-tune a model to provide top-notch customer service.

By fine-tuning the model using your company's own support ticket data, you can create a sophisticated AI chatbot that understands and responds to customer queries more effectively and efficiently.

The fine-tuned LLM can help automatically classify and triage incoming support tickets, responding faster and allocating resources better. It can provide instant, accurate answers to FAQs, reducing the workload on support agents and improving customer satisfaction.

It can also empower your human customer support team by providing them with templates and tips to better handle user complaints.
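As a sketch, here's how historical tickets could be converted into the JSONL chat format commonly used for fine-tuning, one {"messages": [...]} object per line (the tickets below are made-up placeholders):

```python
import json

# Sketch: turning historical support tickets into the JSONL chat format
# commonly used for fine-tuning (one {"messages": [...]} object per line).
# The tickets below are made-up placeholders.
tickets = [
    ("My package arrived damaged.",
     "So sorry about that! I've issued a replacement; it ships tomorrow."),
    ("How do I reset my password?",
     "Click 'Forgot password' on the login page and follow the email link."),
]

lines = []
for question, agent_answer in tickets:
    example = {"messages": [
        {"role": "system", "content": "You are our support assistant."},
        {"role": "user", "content": question},
        {"role": "assistant", "content": agent_answer},
    ]}
    lines.append(json.dumps(example))

jsonl = "\n".join(lines)  # save this as e.g. tickets.jsonl for training
```

Most of the real work (and cost) is upstream of this step: cleaning the tickets, removing sensitive data, and picking examples that reflect the answers you actually want.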

Level 4. Building Your Own LLM

Let's be real here: it's unlikely you'll need this. But if you do… go get a good team of developers. You're playing in the stratosphere.

When to use it

Move on to this level if:
  • You're a large enterprise with a lot of proprietary data -- many companies you know are already doing this.
  • You have government-specific use cases that require custom training.

Building your own LLM is expensive and time-consuming, but will indeed provide the exact results you want.

Final Thoughts

If you remember nothing else from this guide, remember these:

  • Think about large language models in terms of input and output. With the tips and knowledge in this guide, you should be able to optimize your input (prompts) to get desirable results from the LLM. Prompting is programming. Conversation is the new interface.
  • Prompt engineering alone goes a long way, especially if you play with the techniques that help LLMs learn better and faster. Provide clear, concise instructions and examples to make sure you get the desired output. Prompt engineering is a new and evolving field, and new techniques emerge every day.
  • Using APIs gives you more control over the LLM's behavior and lets you work with large quantities of data. You can set a System Role or tweak the hyperparameters to adjust a model's "personality."
  • If you have large amounts of highly domain-specific data, clean up and annotate them. It will be critical to fine-tuning an LLM to serve your industry niche. With fine-tuned models, you can build sophisticated chatbots that know everything about your company or products.
  • Whether you run a large enterprise or a mom-and-pop shop, you could benefit from using LLMs. Use them to automate manual tasks, streamline processes, analyze data, or generate content. Used right, an LLM can cut your customer support workload in half, 3x your content output, market to your audience better, and help you ship new products every week.

Our Take

  • In the near future, anyone sitting on a wealth of data should put it to use with a vector database or fine-tuning.
  • Artificial general intelligence will be the amalgamation of thousands of small, niche models — a system of interconnected models.
  • We’re going to need people to build these small, niche models. People like teachers, chefs, philosophers, historians, comedians, doctors, artists, and more.
  • AI amplifies what it means to be human.

Want more resources like this?

Sign up for The Hustle, and gain access to tons of tactical business advice you can implement today!
