An AI Primer for Marketing Leaders

On November 30, 2022, the Merriam-Webster word of the day was quiddity. It may as well have been ChatGPT. From that day on, that’s all anyone could talk about.

Within months of its late-2022 release, OpenAI’s eerily humanlike chatbot attracted over 100 million users. It sparked an AI arms race among Microsoft, Google, Meta, and hundreds of little-known AI startups. It spewed out dubious advice, stated obvious falsehoods, and came under fire for ingesting reams of intellectual property without regard for rights or ownership.

Still, the introduction of ChatGPT thrust a powerful, new form of artificial intelligence — generative AI — into the mainstream. Experts predicted that this technology alone would transform entire industries, contribute billions to the world economy, and change the course of human history.

And of course, with its ability to produce content at unprecedented speed and scale, generative AI would surely remake marketing. Right? But all of this has left many marketers torn between the fear of missing out and the fear of messing up.

For this reason alone, it’s essential that marketing leaders gain a solid grasp on what generative AI is and why it’s so important. More broadly, though, it’s worth understanding how GenAI fits within the larger context of AI in general and what makes it different from other forms of this technology.

To be clear, as a marketing leader you don’t need to be a data scientist. But it does help to have a working knowledge of key artificial intelligence concepts and terminology — especially if ChatGPT is the first (or only) thing that comes to mind when anyone asks you about AI.

This plain language explainer aims to provide you with that working knowledge. So, let’s start with the big picture. We’ll define key terms along the way.

What is AI?

Artificial intelligence (AI) is a branch of computer science that aims to create systems capable of performing tasks typically requiring human intelligence, such as understanding language, recognizing patterns, and solving problems. Unlike traditional software that follows specific instructions written by programmers, AI systems employ techniques like machine learning to evolve and enhance their performance through experience. Generally, AI functions by analyzing data, identifying patterns, and making decisions based on those patterns to achieve tasks without direct human guidance.

The field of AI is hardly new. A computer scientist named John McCarthy first coined the term in 1956, and the field has developed in fits and starts over the ensuing decades, leading up to this latest peak of AI promise, productivity, and — let’s face it — hype.

Early AI looked nothing like today’s systems. Early techniques focused on the development of so-called expert systems that aimed to replicate human knowledge in a given domain by using a set of coded rules. Though limited in their capability, these knowledge-based systems could still prove useful in supporting human decision-making. By the 1990s, though, many AI researchers had turned their attention from rules-based logic to the development and use of machine learning algorithms.

Machine learning enables AI systems to adapt and improve using algorithms that learn from the data they process. Rather than depending on rigid, pre-written instructions, these algorithms process large amounts of data to discern patterns and relationships. As the AI system encounters more data and experiences, it updates its internal model for a better understanding of the task. This ongoing process of learning and refining its knowledge allows the AI system to become increasingly accurate and effective in performing tasks, adjusting to new situations, and making predictions.
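
To make that concrete, here’s a deliberately tiny sketch in Python, using made-up example data, of what “learning from data” means. Instead of a programmer hard-coding the rule y = 2x, the program discovers it from examples:

```python
# Toy illustration of machine learning: the model is a single adjustable
# weight w, and "learning" means nudging w to reduce error on examples.

def train(examples, steps=1000, lr=0.01):
    """Fit a weight w so that w * x approximates y, via gradient descent."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y   # how far off the current model is
            w -= lr * error * x # nudge w in the direction that reduces error
    return w

# Examples of a hidden pattern (y = 2x) that the programmer never wrote down.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]
w = train(data)
print(round(w, 2))  # the learned weight converges toward 2.0
```

Real systems apply the same update-from-error loop to millions or billions of weights over enormous datasets, which is what lets them adjust to new situations rather than follow pre-written instructions.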

Deep learning is a subfield of machine learning that focuses on neural networks (algorithms loosely modeled on the human brain) with many layers. These networks are capable of learning complex, hierarchical representations of data, making them particularly suited for tasks involving high-dimensional data, such as image recognition, speech recognition, natural language processing, and playing games.

Machine learning — including deep neural networks — has been hard at work inside your marketing technology stack for a decade or more. You’re already using artificial intelligence to make marketing decisions, every day.

For example:

Machine learning began to impact digital ad buying around the late 2000s and early 2010s, as programmatic advertising emerged, and real-time bidding (RTB) platforms were introduced. Since then, machine learning has become an integral part of digital ad buying, with programmatic advertising platforms like Google Ads and social applications like Facebook, Instagram, and Twitter leveraging its power to optimize ad performance and deliver better return on investment for advertisers.

By the mid-2010s, marketing automation platforms, such as HubSpot, Marketo (now Adobe), and Pardot (now Salesforce), began using machine learning for various tasks, including lead scoring, audience segmentation, and content personalization. Today, machine learning is widely used in marketing automation, helping businesses optimize their marketing efforts and make data-driven decisions.

In other words, generative AI isn’t your first brush with machine intelligence. But it is your latest, and it’s notably different from the AI that has come before.

What is Generative AI?

Generative AI refers specifically to AI systems that can take text, code, images, video, or audio as input and generate new text, code, images, video, or audio as output. Generative models stand out for their ability to produce novel outputs rather than simply analyzing inputs.

In a nutshell, generative AI generates things. Most notably, it generates content (using that word in its broadest possible meaning). This generative capability sets it apart from traditional predictive AI, which analyzes existing data to classify or forecast rather than to create.

It’s important to note here that the field of generative AI goes well beyond ChatGPT, and text generation is merely one of many uses. Let’s look at some of the common forms of GenAI and a handful of representative applications associated with each.

  • Writing assistants like Jasper, Writer, Writesonic, and others can draft articles, blog posts, ad copy, outreach emails, reports, summaries, translations, and more based on text prompts. Several of these systems were made specifically for marketers and predated ChatGPT by years.
 
 
  • Video generators can produce convincing synthetic audio and video of a real or imaginary person’s voice or appearance. Companies like Metaphysic offer brands the ability to integrate synthetic celebrities into campaigns, while tools like Synthesia and D-iD generate reasonably lifelike virtual spokespeople for a wide variety of use cases. At the same time, these technologies might be misused for the creation of misleading or malicious deepfakes.
 
  • Audio-video editors like Descript, Momento, and OpusClip speed and simplify post-production, transcription, and the creation of derivative assets for social media distribution.
 
  • Audio generators like ElevenLabs convert text to audio and can clone voice, while AI music composers can generate original songs, instrumentals, or accompaniments based on desired genres, moods, and styles.
 
  • Multimodal models like RunwayML GEN-2 can process, understand, and generate information in multiple formats including text, images, video, and sound, even in a single interaction.
 
  • With existing and new players introducing new capabilities almost daily, users can now generate computer code, websites, slides, data visualizations, and more.

 

As a creative “co-pilot,” generative AI has the potential to unlock unprecedented productivity, efficiency, speed, and scale. It can help generate many iterations quickly, allowing the user to select the best ideas to move forward. For marketers, GenAI has game-changing implications and applications across everything we do: from how we understand audiences and set strategy to how we produce content and campaigns, connect with customers, and accelerate outcomes.

The Technology Behind GenAI

It’s helpful to understand the technology and terminology that underlie generative AI. And it makes sense to start at the beginning: even if most people weren’t thinking or talking about GenAI until recently, the concept is nearly as old as AI itself.

In 1966, MIT computer scientist Joseph Weizenbaum introduced the world to a rule-based chatbot named Eliza, generally considered the earliest generative AI system. Running a script called “Doctor,” programmed to mimic the open-ended back-and-forth between a psychotherapist and patient, Eliza engaged human users in text-based conversations by applying pattern-matching rules to simulate personalized, relevant, one-on-one dialogue. Although Eliza was quite a rudimentary chatbot, its apparent intelligence and understanding often wowed the people who interacted with it. Some even attributed humanlike emotions to Eliza, despite knowing it was merely a computer program.

Now clearly, generative AI has come a long way since 1966 — but you can still detect a bit of Eliza’s DNA in modern conversational systems like ChatGPT. How so?

Eliza was an early example of natural language processing (NLP) – a field within artificial intelligence focused on helping computers understand, interpret, and respond to human language in a valuable way. NLP is at work in everything from Siri to spam detection. But NLP took a giant leap forward in 2017, when Google engineers developed a new type of deep learning neural network architecture designed specifically for natural language processing tasks — Transformers. Many AI developers now use this type of architecture as the basis for generative AI systems.

Transformers don’t just process input data sequentially; they weigh and prioritize different parts of the input when producing an output. In natural language processing, when a Transformer reads a word in a sentence, it doesn’t look at that word in isolation. It looks at all the other words in the sentence at the same time and uses their context to understand the word’s meaning. It does this through something called an “attention mechanism.” In other words, a Transformer can “pay attention” to different parts of a sentence to better understand it. This is a big deal because it helps computers understand and use language more like humans do.
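
For the technically curious, the core attention idea can be sketched in a few lines of Python. This is a deliberately hand-simplified illustration with tiny made-up vectors, not how production Transformers are implemented (real models use learned, high-dimensional embeddings and separate query/key/value projections):

```python
import math

# Toy sketch of "attention": a word scores every word in the sentence by
# similarity, then builds its context as a weighted blend of all of them.

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Weigh every (key, value) pair by its similarity to the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)  # higher similarity -> more "attention"
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Three "words", each represented by a tiny made-up 2-number vector.
# The first word attends to every word in the sentence at once.
words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context = attend(query=words[0], keys=words, values=words)
```

The resulting `context` vector leans toward the words most similar to the query, which is the intuition behind a Transformer “paying attention” to the most relevant parts of a sentence.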

Now, you may have already surmised that the T in ChatGPT stands for Transformer. The chat application is named for its underlying AI model — OpenAI’s Generative Pretrained Transformer — with ChatGPT itself, along with a wide range of third-party generative AI applications, running on GPT-3.5 or GPT-4 Large Language Models.

You’ve no doubt heard the term Large Language Model (LLM for short). LLMs are deep learning models trained on vast amounts of text data. They’re designed to generate humanlike writing and can perform tasks like translation, question answering, summarization, and more. So, the GPT models are Large Language Models that use the Transformer architecture to understand the context of words in sentences.
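
At a toy scale, the core trick of predicting the next word from patterns in training text can be illustrated with a simple bigram counter. This sketch (with a made-up mini “corpus”) is nothing like a real LLM, which learns far richer statistics over billions of documents, but the prediction framing is the same:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then always emit the most frequent successor.
corpus = "the customer is always right and the customer is king".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most probable next word seen in training, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("customer"))  # "is" follows "customer" in both examples
```

Note what this model does not have: any notion of whether its output is true. It simply emits the statistically likeliest continuation, which foreshadows the “hallucination” problem discussed later.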

While some tend to use LLM and GPT interchangeably, it’s important to note that GPT and its iterations are specific to OpenAI. The underlying principles, though, including the use of Transformers and large-scale language model training, are at work in other models like Google’s BERT, Meta’s LLaMa, and Anthropic’s Claude.

Collectively, you may hear these “big tech” algorithms described as Foundation Models. Like the foundation of a house, a Foundation Model is a base that you can build on to create something more specialized. These models are trained on a massive amount of general-purpose data to learn a broad understanding of language. They can then be fine-tuned or adapted to perform many different tasks well. This is why a single application like ChatGPT can write a business article, pen decent poetry, produce working computer code, and turn research data into charts and graphs.

But as we’ve already noted, text is only one form of data, and text generation is only one use for GenAI. If you consider the wider range of capabilities — including image generation, video generation and editing, and so on — you’ll find other Foundation Models that aren’t LLMs and are instead built on entirely different deep neural network architectures. For example, image generators like Adobe Firefly and Midjourney use models trained on massive amounts of visual data rather than text, even if they also use natural language processing to understand what you’re looking to create.

Still, the one thing that unites all Foundation Models is their sheer size. Only the world’s largest organizations — organizations with the required technology, talent, computing resources, and budgets — can build, train, and deploy a Foundation Model.

Most organizations — including most marketing teams — will use pre-trained models developed by large tech companies or AI research institutions. This is a more cost-effective and practical approach for most marketers because it allows you to focus your resources on fine-tuning existing models for your specific use cases, rather than attempting to boil the big-data ocean.

In fact, it’s even more likely that you’ll tap into GenAI models through third-party “plug-and-play” application providers that have already built the features, functionality, and workflows that support your most common marketing use cases. Your first (and maybe only) foray into marketing GenAI might be through the embedded features being introduced at speed by established martech cloud providers like Adobe, HubSpot, and Salesforce, or any of the newer AI-first application providers.

This isn’t so different from other technology choices you’ve made over the years. After all, you probably didn’t build your own customer data platform, proprietary programmatic ad network, or marketing automation platform, instead opting for third-party solutions. But that doesn’t mean GenAI technology decisions don’t come with their own unique set of concerns and challenges. It’s important to ask the right questions to ensure that your chosen partners meet your requirements, support your use cases, and deliver the kinds of performance improvements you expect.

GenAI Challenges and Limitations

If you buy into the hype, generative AI might seem downright magical. Really though, it’s just technology. And like any nascent technology, GenAI faces its share of challenges, risks, and limitations. Let’s explore a few of the more common challenges.

  • Hallucinations: You’ve probably heard that generative AI systems can produce information that seems factual but isn’t — a phenomenon known colloquially as “hallucination.” Think of generative AI as a guessing game of sorts: a system like ChatGPT produces the most statistically probable sequence of words based on its training data, without knowing whether the information in that sequence is true or false. This can pose serious risks when marketers rely on these systems and their output for decision-making, research, or content creation without adequate human review, ownership, and accountability.
 
  • Harmful Content: Most generative AI models are trained on information scraped from the internet. With so much inappropriate content online, AI models can easily absorb offensive material. This is a challenge the technology companies themselves aim to address when training their models and by applying guardrails to their user-facing applications. Still, it can become your problem if you use a generative AI tool in live customer interactions, such as website chatbots, or don’t have adequate checks and balances in place.
 
  • Algorithmic Bias: Similarly, generative AI can reflect human biases from its training data. Biased AI could generate insensitive, offensive, or unethical content that damages your brand. For example, an AI chatbot could give discriminatory responses to certain groups of customers, or AI-generated advertising might inadvertently reinforce a harmful gender or ethnic stereotype. In some ways, bias in AI systems is unavoidable, but marketers can take steps to mitigate it by diversifying training data and testing extensively before and during deployment.
 
  • Intellectual Property: There’s a risk that employees interacting with AI systems might inadvertently leak your company’s intellectual property or other proprietary information (like your product roadmap, new messaging strategy, or upcoming ad campaigns). This is a particularly big risk when using publicly available generative AI tools like ChatGPT. Any information entered into the system may be stored and incorporated into the training data set for the underlying model.
 

There’s also a risk that the output generated by the AI tool might violate someone else’s intellectual property, exposing your company to potential lawsuits and reputational risks. And, as of this writing, the jury is still out on whether content created by or with a GenAI system is protected under copyright, patent, or trademark law. Until this is settled, using AI-generated content might be akin to putting your marketing copy or imagery into the public domain.

 
  • Over-Reliance: The makers of today’s popular GenAI systems make some big promises — promises that themselves may not be entirely true. Marketers who uncritically accept these claims may rely too heavily on a GenAI system to produce high-quality, polished, on-brand work, or to generate wholly original thinking capable of breaking through competitive clutter. The reality falls short of the promise. Because GenAI is trained on massive amounts of existing content, its output tends toward a “mediocre middle.” It always benefits from a human touch. In short, GenAI is an assistant, not a boss.

Overall, GenAI offers powerful ways to create helpful and engaging new content at scale. But without an active, engaged human marketer in the loop (and ultimately in control), you’ll run the risk of settling into a “mediocre middle” characterized by me-too marketing at best and brand safety issues at worst. And ultimately, any implementation of artificial intelligence needs to consider ethics, responsibility, and governance from the very start.

Put Generative AI to Work

The most exciting possibility with generative AI is an approach we call ‘Human+’ creativity, where human insight, imagination, and direction are paired with the capabilities of this cutting-edge technology. Instead of thinking of it as marketers vs. machines, the winning formula is marketers multiplied by machines.

GenAI delivers impressive boosts in productivity, raises the floor for everyday creativity, and amplifies performance. When guided by human insight, intent, and expertise, generative AI becomes a launchpad for human creativity rather than a limiter. It gives marketers creative superpowers: the ability to generate hundreds of novel ideas in the time it once took to create one. Humans drive the strategic and creative direction, selection, and polish, while AI greases the wheels.

CognitivePath is the leading generative AI consultancy for marketing organizations.
Want to know more about putting generative AI to work in your organization?