Understanding Generative AI and Its Distinction from Traditional Systems
Chapter 1: The Cautious Approach of Enterprises
Organizations tend to take a conservative stance when deploying automation technologies at scale. When a business adopts an AI system, one of two scenarios usually applies:
- The AI application is so low-stakes that it poses no significant risk to the organization. Since AI systems are prone to errors, this usually means it isn't automating anything crucial. In practice, this situation is rare: the investment required to build, deploy, and maintain an AI solution is hard to justify when the application is trivial.
- The AI application is vital to the organization's strategy, and its effectiveness justifies both the cost and the heightened risk of mistakes.
Section 1.1: Characteristics of Traditional AI Systems
Most traditional enterprise AI systems of the past decade fall into the second category: the decision to deploy them comes down to whether their effectiveness justifies the cost of their mistakes. So a good rule of thumb is that if a company has implemented an AI system at scale (rather than in a limited experimental setting), you can expect the following features:
- A clearly defined use case
- A vision of the expected “correct behavior” of the system
- A specific, singular goal
- Measurable performance metrics
- Well-established testing standards
- A good understanding of potential failures and necessary safety measures
Ask any trust and safety expert and they will likely agree that these factors make their job easier. I've previously written about why the third point is especially crucial, and I'll summarize it here:
It is significantly simpler to safeguard a diverse user base from a system designed for a singular purpose rather than a multi-faceted one.
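To make that concrete, here's a toy sketch of what a singular goal plus measurable metrics buy you. Everything in it is made up for illustration (the `predict` function, the fraud-flagging example, the 0.95 threshold); the point is simply that the launch decision for a single-purpose system reduces to one number and one pre-agreed threshold.

```python
# A toy sketch of what "a specific, singular goal" buys you: one metric,
# one threshold, one launch decision. (Hypothetical names throughout.)

def evaluate_single_purpose_model(predict, labeled_examples, accuracy_threshold=0.95):
    """Evaluate a system that does exactly one thing, e.g. flagging fraudulent invoices."""
    correct = sum(1 for x, expected in labeled_examples if predict(x) == expected)
    accuracy = correct / len(labeled_examples)
    # One measurable performance metric, one well-established testing standard:
    # "is it good enough to deploy?" reduces to a single comparison.
    return {"accuracy": accuracy, "safe_to_launch": accuracy >= accuracy_threshold}

# Demo with a trivial rule standing in for the model:
examples = [("invoice from unknown vendor", "fraud"),
            ("monthly rent", "ok"),
            ("gift cards x500", "fraud")]
flag = lambda text: "fraud" if ("unknown" in text or "gift cards" in text) else "ok"
print(evaluate_single_purpose_model(flag, examples))  # {'accuracy': 1.0, 'safe_to_launch': True}
```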
Section 1.2: Generative AI vs. Traditional AI
So, how does all this compare to generative AI systems, particularly large language models (LLMs)? This is where complexities arise.
From the viewpoint of those who create foundational LLMs, the principle remains the same: there is an optimal behavior that the system is designed to achieve. For instance, outputs such as nonsensical text or repetitive phrases are considered incorrect by LLM developers. Need a tangible example? Check out CatGPT (cat-themed, not chat) and see how long it retains your attention. The superior performance of ChatGPT in generating coherent sentences compared to CatGPT is no coincidence. It requires deliberate development aimed at a clear target, along with rigorous testing that is impossible without a defined notion of correct behavior, assessment criteria, and performance metrics.
A representative screenshot of CatGPT’s scintillating conversational capabilities.
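If you're wondering how "repetitive phrases are incorrect" turns into something an engineering team can actually test, here's one toy way to make it a number. This is my own simplified illustration, not any lab's real evaluation suite: score a reply by the share of distinct three-word sequences it contains, so CatGPT-style looping scores low and coherent text scores high.

```python
# A toy metric for "the output shouldn't just repeat itself":
# the share of distinct trigrams (three-word sequences) in a generated reply.

def distinct_trigram_ratio(text: str) -> float:
    words = text.lower().split()
    if len(words) < 3:
        return 1.0  # too short to repeat a trigram
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    return len(set(trigrams)) / len(trigrams)

print(distinct_trigram_ratio("meow meow meow meow meow"))                   # ~0.33: repetitive
print(distinct_trigram_ratio("large language models are trained on text"))  # 1.0: no repeats
```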
From the perspective of teams that create foundational large language models, every element from that earlier list is present, including safety measures. To observe one in practice, simply pose a medical question to an LLM. Safety protocols are the reason the answer will contain disclaimers such as, “consult your doctor,” “it's essential to see a healthcare professional,” or “I'm an AI, not a medical expert.” For example, here's how LLMs respond to a recent health inquiry of mine:
Safety nets in action. Google Bard is on the left, ChatGPT is on the right, and Humpty Dumpty is your author. By the way, I intentionally used a vague prompt for simplicity. To use these systems effectively, specificity is crucial. Provide details: How did the injury occur? When did it happen? What are your current symptoms? etc.
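In case it helps to see the idea in miniature, here's a deliberately oversimplified sketch of a safety net. Real providers build this behavior in through fine-tuning and layered policy checks rather than a keyword list, but the intent is the same: certain topics should always come back with a "see a professional" disclaimer. All names and keywords below are made up for illustration.

```python
# A deliberately simplified stand-in for a safety net: if the prompt looks
# medical, the answer always carries a disclaimer. (Illustrative only; this
# is not how production LLM guardrails are actually implemented.)

MEDICAL_KEYWORDS = {"injury", "pain", "symptom", "diagnosis", "medication", "swelling"}

def add_safety_disclaimer(user_prompt: str, draft_answer: str) -> str:
    if any(word in user_prompt.lower() for word in MEDICAL_KEYWORDS):
        return draft_answer + "\n\nI'm an AI, not a medical professional, so please consult your doctor."
    return draft_answer

print(add_safety_disclaimer("My ankle swelling won't go down", "Rest and ice may help."))
```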
However, there is one significant difference that relates to something not included in that list. It’s subtle yet substantial. Once you recognize it, everything becomes clear.
What could this difference be? Stay tuned for the next installment where I'll elaborate further!
Thanks for reading! Interested in a YouTube course?
If you enjoyed this content and are looking for an engaging AI course suited for both novices and experts, check out the one I created for your enjoyment:
The first video titled "Generative AI vs Traditional AI" dives into the fundamental differences and applications of these technologies.
The second video, "What's The Difference Between Generative AI And Traditional AI?" further explores these distinctions and their implications for the future of technology.
P.S. Ever tried clicking the clap button on Medium more than once to see what happens? 🤔
Liked the author? Connect with Cassie Kozyrkov on Twitter, YouTube, Substack, and LinkedIn. Interested in having me speak at your event? Use this form to get in touch.