Interest in generative AI remains robust, yet many organizations are hesitating to fully implement it due to emerging risks. A recent study within the manufacturing sector indicates that rising concerns about potential hazards are causing manufacturers to reconsider their deployment strategies.
This article delves into three critical blind spots that, if overlooked, could lead to significant problems. Before we explore these issues, it’s important to recognize that generative AI functions differently from traditional technology.
Understanding the Unique Nature of Generative AI
Generative AI is distinct for several reasons:
- It operates on neural networks inspired by the human brain, which itself is not fully understood (source).
- Generative AI utilizes large language models (LLMs), which are trained on vast datasets. The content and approach to transparency within these models can vary significantly among different generative AI solutions.
- Experts still grapple with comprehending the operations of generative AI, as highlighted by MIT Review.
While generative AI holds tremendous capabilities, it is also laden with uncertainties. By increasing awareness of its pitfalls, organizations can better manage the risks associated with its use.
1. The Rising Demand for Transparency
There is an increasing expectation for organizations to be transparent about the application of generative AI, voiced by governments, employees, and consumers alike. Inadequate preparedness in this aspect could expose companies to legal repercussions, loss of clientele, and other severe consequences.
The global landscape is witnessing an uptick in regulations concerning generative AI, with the European Union spearheading initiatives like the AI Act. To align with such regulations, companies must clarify when and how they utilize generative AI technologies, ensuring that they are not replacing humans in critical decision-making roles or perpetuating biases.
Moreover, employees and customers deserve clear communication about generative AI’s role in processes such as hiring. Transparency is vital; candidates and relevant team members should be informed when AI is used in evaluations (for deeper insights, refer to this comprehensive guide).
Additionally, companies should disclose their use of generative AI in customer interactions, whether through text, voice, or chat interfaces. One effective method is incorporating these disclosures into company policies, as illustrated by Medium’s content policy. Another approach could be to indicate AI-generated responses in the customer experience, as AWS demonstrates by showing AI-generated abstracts.
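To make this concrete, here is a minimal sketch (in Python, with hypothetical function and field names) of how a support-chat backend might attach a disclosure flag to AI-generated replies so the customer-facing layer can always render the label:

```python
from dataclasses import dataclass

@dataclass
class ChatReply:
    text: str
    ai_generated: bool  # drives the disclosure shown to the customer

def build_reply(text: str, ai_generated: bool) -> ChatReply:
    """Wrap a reply so the UI layer always knows its origin."""
    return ChatReply(text=text, ai_generated=ai_generated)

def render_for_customer(reply: ChatReply) -> str:
    """Prepend a plain-language disclosure to AI-generated replies."""
    if reply.ai_generated:
        return f"[AI-generated response] {reply.text}"
    return reply.text

# Example: a model-drafted answer is labeled before it reaches the customer.
draft = build_reply("Your order ships within 2 business days.", ai_generated=True)
print(render_for_customer(draft))
```

Keeping the flag on the data structure, rather than in the prose itself, means the disclosure policy can be enforced in one place regardless of channel (text, voice, or chat).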
2. An Increasing List of Inaccuracy Challenges
The adage "garbage in, garbage out" applies squarely to generative AI. However, the ways inaccuracies can arise in this context are still evolving.
- Unreliability in Numerical Tasks: Generative AI has proven unreliable for numerical work, as evidenced by various reports on its mathematical limitations. For tasks requiring precision, users should supplement the AI with deterministic methods (see the sketch after this list).
- Quality of Input Data: If an LLM is trained on flawed, outdated, or biased information, the consequences for businesses can be substantial. The recent trend of reputable content providers, such as The New York Times and Condé Nast, withdrawing their content has reportedly reduced the data available for training generative AI by 50%.
- Content from Internal Sources: Organizations often need to train generative AI on their own data. If this training data falls short of established content standards, is outdated, or contains errors, the risk is amplified.
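On the numerical-reliability point above, one common mitigation is the "calculator tool" pattern: the application evaluates arithmetic deterministically instead of asking the model to compute it. Below is a minimal sketch in Python; the function names are illustrative, not a specific vendor's API:

```python
# Evaluate plain arithmetic in code rather than trusting the model's math.
import ast
import operator

# Only whitelisted arithmetic operators are evaluated.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
}

def safe_eval(expr: str) -> float:
    """Evaluate a simple arithmetic expression without using an LLM."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# The model can draft the prose; the precise figure comes from real arithmetic.
subtotal = safe_eval("149.99 * 3 + 12.50")
print(f"Your order total is ${subtotal:.2f}.")
```

The design choice here is separation of duties: language generation stays with the model, while anything a customer could check on a calculator is computed by ordinary code.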
Research shows that firms with a mature approach to content operations can leverage generative AI more effectively, as they maintain systems for documenting standards and ensuring quality (source). Organizations that currently lack such frameworks should note that it’s possible to implement them effectively, as demonstrated by a recent project where my team aided a major home improvement retailer in establishing comprehensive standards across relevant communication channels in less than three months.
Addressing accuracy challenges is not only beneficial for performance but also helps reduce the likelihood of introducing bias or infringing on copyrights.
3. The Necessity for Ongoing Maintenance
While generative AI may appear transformative, it requires consistent maintenance from both the organization and the technology provider. Deploying generative AI without a structured maintenance plan intensifies the risks associated with transparency and accuracy.
- Drift: This occurs when real-world changes render a generative AI model outdated. For example, a chatbot may give inaccurate product information because it was never updated with new product features (see the monitoring sketch after this list).
- Degradation: Also termed model collapse, this is when the generative AI model’s performance declines rather than improves, often due to a lack of high-quality content. Ironically, LLMs can falter when fed AI-generated content.
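As part of a maintenance plan, drift can be caught with routine evaluation rather than discovered by customers. Here is a minimal monitoring sketch, assuming a hypothetical ask_model() client and a small evaluation set kept in sync with the current product catalog; all names and thresholds are illustrative:

```python
# A minimal drift check: replay known questions against the model on a
# schedule and alert when accuracy slips below an agreed floor.
EVAL_SET = [
    {"question": "What sizes does the ProDrill 500 come in?",
     "expected": "compact and full-size"},
    # ...more question/answer pairs maintained alongside the catalog
]

ACCURACY_FLOOR = 0.90  # below this, the model needs fresh data or retraining

def ask_model(question: str) -> str:
    raise NotImplementedError("call your generative AI provider here")

def model_is_current() -> bool:
    """Return True if the model still answers the evaluation set correctly."""
    correct = sum(
        1 for item in EVAL_SET
        if item["expected"].lower() in ask_model(item["question"]).lower()
    )
    return correct / len(EVAL_SET) >= ACCURACY_FLOOR

# Run nightly (e.g., from a scheduler) and notify the team on failure.
```

The same harness doubles as a degradation check: a downward trend in the score over successive runs is an early signal that model quality is declining, not just that the catalog has changed.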
Despite its complexities, generative AI offers organizations unprecedented potential to enhance their content capabilities. However, this power is coupled with significant risks. It is crucial for businesses to take these risks seriously when planning generative AI integration to minimize challenges and maximize success.
Source: www.entrepreneur.com