Making Generative AI Accessible for All Enterprises — A Highly Achievable Low-code Approach

Prophecy CEO Raj Bains explores three options for applying generative AI to enterprise data, each with its pros, cons, and key takeaways, and introduces the company’s low-code generative AI capabilities that enable data users of all skill sets to rapidly develop high-quality pipelines.
The excitement around generative AI is reaching a fever pitch. The technology has made its way into the mainstream and has begun to affect everything from simple internet usage to the most complex data-driven tasks. While generative AI is not exactly new, recent developments have propelled it to new heights, starting a sort of “generative AI arms race” with all major technology players taking part.

Notable efforts include Google’s Pathways Language Model (PaLM), launched in April 2022, and Meta’s LLaMA (Large Language Model Meta AI). However, it was OpenAI's ChatGPT that truly made waves and became the industry's killer app. Bursting onto the scene in November 2022, ChatGPT quickly gained momentum, amassing a staggering 100 million monthly active users (MAUs) in just two months and setting a record for the fastest-growing user base.

The demand for generative AI has skyrocketed, transforming how enterprises approach artificial intelligence and its potential applications. The narrative surrounding generative AI has evolved just as fast as its adoption. User activity has gone from playful requests for jokes and haiku to tasking ChatGPT with crafting entire blog posts on an incredible variety of topics (no, ChatGPT did NOT write this one).

Just as company leaders have begun to realize the vast potential of generative AI, a chorus of highly optimistic thought leaders has emerged. For example, McKinsey has boldly declared that generative AI has the power to inject trillions of U.S. dollars of value into the global economy. Trillions! The movement seems unstoppable, but there are early challenges.

The Challenge With LLMs

Off-the-shelf LLMs, such as the GPT series, are typically trained on publicly available data until a certain date. (As of the writing of this article, that is September 2021.) This means that while they can provide answers to questions like, "What are the best places to vacation in Laos on a budget?" they lack the knowledge and understanding that can be derived from enterprise data.

For businesses, this limitation becomes clear when you attempt to use LLMs in the context of the enterprise and ask basic questions about internal HR policies or proprietary information.

The question then arises — Why shouldn't the transformative power of generative AI be harnessed for enterprise data, enabling organizations to leverage the full potential of these models within their specific contexts?

Choices are Available, but Which Is the Right Fit for You?

For organizations that are serious about pursuing LLMs as part of their strategic business use cases, there are essentially three options for applying generative AI to enterprise data. The three options, with their pros, cons, and takeaways, are as follows:

Option 1: Build your own foundational model

  • Pros: High accuracy, completely customizable, can be scaled and adapted to evolving business needs

  • Cons: Very expensive, requires specialized technical skills, potential performance gaps compared to pre-trained models

  • Takeaway: This is a viable option only for the Googles and the Metas of the world and it doesn’t make a lot of sense for the vast majority of enterprises.

Option 2: Specialize or fine-tune a foundational model

  • Pros: Faster than building an LLM, less expensive, can be fine-tuned to align with specific business requirements

  • Cons: Still requires specialized skills and equipment, and lacks real-time relevance as models are typically trained in batch

  • Takeaway: It can make sense for organizations that are well equipped, with in-house resources and expertise, to undertake the fine-tuning process effectively, identify and fix potential issues, and ensure ongoing optimization.

Option 3: Prompt engineering

  • Pros: Easier and more user-friendly, provides greater flexibility, and the least expensive option

  • Cons: Slightly lower precision, ‘pay-per-token’ pricing can be complicated, poor prompt design can lead to inaccurate or undesired outputs

  • Takeaway: This option will work for the vast majority of enterprises. Users of all technical skill levels can be immediately productive, achieving desired outputs more quickly and with fewer iterations, saving time and computational resources.

Of these, prompt engineering is the choice that makes the most sense for organizations of all types, as it requires the least from a resource and technical expertise standpoint. A user simply sends the needed context, along with a question in natural language, directly to the model.

Additionally, with prompt engineering, you’re able to treat generative AI applications as data and ETL problems, which are much more relatable and accessible to the majority of data users, instead of AI or data science problems, which can really only be effectively solved by organizations with deep resources and highly trained personnel.
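To make the pattern concrete, here is a minimal sketch of prompt engineering against enterprise data. It uses the OpenAI Python client purely as an example; the policy snippet, helper function, and model name are illustrative assumptions, not part of any particular product.

```python
# Minimal prompt-engineering sketch: pass enterprise context along with the question.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical internal document snippet -- in practice this would be pulled
# from your own data sources (wikis, HR systems, ticketing tools, etc.).
POLICY_SNIPPET = """
Employees accrue 1.5 days of paid time off per month, capped at 30 days.
Unused PTO does not roll over past March 31 of the following year.
"""

def ask_with_context(question: str, context: str) -> str:
    """Send a question plus supporting enterprise context to a hosted LLM."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided company context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_with_context("What is my organization's policy on time off?", POLICY_SNIPPET))
```

In practice, the context would be retrieved from internal systems rather than hard-coded, which is exactly the data and ETL framing described above.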

Prophecy’s Vision for Generative AI

We founded Prophecy with a few core beliefs:

  • Technology is measured by business value delivered. Not buzzwords, not stacks-du-jour, and certainly not tooling that won’t work for everyone.

  • Self-service is king. The more you can empower every data user to serve themselves, the faster they can be productive and the less pressure you put on limited, specialized resources like data engineers.

  • Collaboration is queen. Empowering data professionals to work together seamlessly allows your organization to unlock the true potential of your data quickly and efficiently.

These beliefs led us to develop our visual, low-code capabilities, which enable data users of all skill sets to build high-quality pipelines quickly and easily. Our visual data pipeline builder automatically generates 100% open-source SQL and Spark code based on engineering best practices, for easier customization and modification as needed.
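For readers unfamiliar with what open-source Spark pipeline code looks like, the snippet below is a generic, hand-written PySpark sketch, not actual output from Prophecy’s builder; the paths and column names are hypothetical.

```python
# Generic PySpark pipeline sketch: clean unstructured support tickets and
# aggregate daily volume. Paths, columns, and bucket names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("support_ticket_prep").getOrCreate()

# Load raw support tickets (hypothetical location and schema).
tickets = spark.read.json("s3://example-bucket/raw/support_tickets/")

# Clean and standardize: drop empty bodies, normalize timestamps, trim text.
cleaned = (
    tickets
    .filter(F.col("body").isNotNull() & (F.length("body") > 0))
    .withColumn("created_at", F.to_timestamp("created_at"))
    .withColumn("body", F.trim(F.col("body")))
)

# Aggregate ticket volume per product area per day for downstream reporting.
daily_volume = (
    cleaned
    .groupBy(F.to_date("created_at").alias("day"), "product_area")
    .agg(F.count("*").alias("ticket_count"))
)

daily_volume.write.mode("overwrite").parquet("s3://example-bucket/curated/ticket_volume/")
```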

Building on the success of our low-code data engineering, we are excited to bring to market the Prophecy Generative AI Platform. This new offering provides a simple way for organizations to power generative AI applications using their own enterprise data. With this platform, data users can apply the same easy-to-use tooling and capabilities to create generative AI applications in a matter of days, ultimately enabling employees to serve themselves.

For example, an internal chatbot can be quickly built and grounded in your organization’s HR policies and other unstructured documents (like support tickets and Slack messages). This chatbot can then interact with employees in natural language, answering questions such as, “What is my organization’s policy on time off?” and freeing up your limited HR resources for higher-value tasks. No specialized skills or resources are required; a single data engineer and an application developer can build it.
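As a rough sketch of how such a chatbot could be assembled with plain prompt engineering, the example below embeds a handful of unstructured documents, retrieves the most relevant ones for each question, and lets the model answer only from that context. The documents, model names, and function names are all hypothetical; a real deployment would read from your own document stores and a vector index.

```python
# Retrieval-augmented chatbot sketch over unstructured internal documents.
# Assumes the `openai` (v1+) and `numpy` packages; documents and models are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical unstructured documents (HR policies, support tickets, Slack threads).
DOCUMENTS = [
    "PTO policy: employees accrue 1.5 days per month, capped at 30 days.",
    "Parental leave: 16 weeks paid leave for primary caregivers.",
    "Expense policy: meals while traveling are reimbursed up to $75 per day.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts with a hosted embedding model."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

DOC_VECTORS = embed(DOCUMENTS)

def answer(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant documents and answer only from them."""
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every document.
    sims = DOC_VECTORS @ q_vec / (
        np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(DOCUMENTS[i] for i in np.argsort(sims)[::-1][:top_k])
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Answer only from the provided HR context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is my organization's policy on time off?"))
```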

The Prophecy Generative AI Platform will democratize generative AI development and further unlock the potential of enterprise data, and we’re excited about the impact it will have!

Get Started With Data Copilot Today

Watch the webinar with Prophecy Co-founders and Kevin Petrie of the Eckerson Group to learn more
