Harnessing the power of generative AI is key to unlocking new levels of efficiency, personalization, and innovation. From tailoring customer experiences through personalization, to automating content creation, to supporting employees with AI-driven chatbots, the applications of generative AI are diverse and impactful. Generative AI applications built on enterprise data therefore have enormous potential in the coming years.

This post was originally published by Prophecy.

Use Case: Generative AI Apps on Enterprise Data

Problem Statement

When an enterprise produces content with generative AI, the knowledge accumulated from past conversations and interactions is usually lost, leading to inefficient use of the foundational generative models.

Realization Approach

A prompt-based generative AI app can harness enterprise-wide knowledge and channel it through a dedicated ETL pipeline to build and optimize the knowledge warehouse, leading to continuous advancement of foundational models.

Solution Space

With an ever-expanding set of prompts capturing intricate and nuanced interactions, the model progressively improves the relevance and context of its generated output when answering domain-specific queries.

In this post, we’ll explore how to build a generative AI application that leverages your enterprise data and how prompt engineering can not only ensure model performance, but also enable it to generate more accurate, relevant, and compliant responses.

The Basic Build Process for Generative AI Apps

Creating generative AI applications is a step-by-step process that transforms innovative ideas into practical solutions. This approach involves a series of structured steps, ensuring the successful development and deployment of applications that leverage the power of generative AI. The core components of this approach include:

  1. Identify the use case: Start by identifying the use case that aligns most effectively with your enterprise’s goals. Prioritize the use case that can offer immediate value and impact.
  2. Choose a model: Select an AI model that suits the identified use case. You have three options: 
    • Build a custom model from scratch
    • Specialize an existing foundational model
    • Utilize prompts to guide the AI’s output (this is our recommendation) 
  3. Produce: Based on the selected model, use the appropriate methods to generate content. (This step’s specifics are influenced by your model selection.)
  4. Integrate: Merge the generative AI model seamlessly with enterprise data to ensure relevance and coherence in the output.
  5. Iterate: Test and refine the application iteratively. Ask for user feedback and enhance the application accordingly.
  6. Publish: Take your refined application live, making it accessible to all users.
  7. Monitor: Track application performance, documenting any errors encountered. Continuously test and evolve the application to adapt to changing user needs and technology advancements.

Choosing a Model for Your Enterprise AI Application

The process of selecting an appropriate model for your enterprise AI application is a critical step that significantly influences the success of your project. There are three approaches to choose from:

Build your own: This option, while viable, often proves to be impractical due to the incredibly high costs and resource requirements needed for development and maintenance.

Specialize an existing model: Creating and managing proprietary models based on established frameworks (like GPT-3.5) is a popular choice, but this route can also be resource-intensive and expensive.

Leverage prompts: Prompt-driven models offer a practical solution for the majority of generative AI applications in an enterprise context. This approach is highly recommended for its efficiency, versatility, and ease of implementation. 
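To make this concrete, here is a minimal sketch of the prompt approach, assuming the OpenAI Python client; the model name, company name, and prompt wording are purely illustrative. The application steers a hosted foundation model entirely through its prompt, with no retraining or fine-tuning:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt carries the enterprise-specific instructions;
# "ACME Corp" is a hypothetical company used only for illustration.
SYSTEM_PROMPT = ("You are a support assistant for ACME Corp. "
                 "Answer only from the provided context.")

def answer(question: str, context: str) -> str:
    """Steer a hosted foundation model with a prompt; no fine-tuning needed."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any hosted chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I reset my password?",
             "Passwords are reset from Settings > Security in the web app."))
```

Because all of the domain knowledge arrives through the prompt, swapping in a newer or better foundation model is a one-line change.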

Why we recommend the prompt approach

So why do we recommend the prompt approach for building and training generative AI apps on enterprise data? There are a number of reasons, which we will dive into.

Continuous advancement of foundation models: The foundational models behind generative AI applications are continually improving. Through prompts, you can more easily – and economically – harness these improvements, without extensive model adjustments.

Expanding prompt sizes: The size and scope of prompts are expanding, allowing for more intricate and nuanced interactions – which are critical for models to perform domain-specific tasks. This enables you to create richer and more diverse outputs, dramatically increasing the overall quality of generated content.

Simplicity, cost-efficiency, and speed: Opting for the prompt approach streamlines the development process. It’s a straightforward method that sidesteps the complexities and expenses associated with building or customizing models from scratch.

How to Build Generative AI Apps on Enterprise Data with Prompt Engineering

In this section, we’ll walk you through the process of building a generative AI application utilizing the prompt engineering approach. This method offers a structured framework for crafting effective AI solutions. At its core, it revolves around three essential components:

  • Knowledge warehouse – A repository for unstructured data sources, encompassing a variety of inputs such as documents, Slack messages, and support tickets. This reservoir serves as the raw material from which the AI derives insights and generates content.
  • Batch ETL pipeline – An essential tool for constructing and maintaining the knowledge warehouse. The Batch Extract, Transform, Load (ETL) pipeline automates the process of gathering, transforming, and storing data, ensuring the knowledge warehouse remains current and relevant.
  • Enterprise data integration – This component focuses on seamlessly merging your generative AI model with pertinent enterprise data. The outcome is an application that harmonizes AI-generated outputs with the wealth of insights stored within the knowledge warehouse.

Knowledge warehouse

At the heart of constructing effective generative AI apps on enterprise data is a knowledge warehouse, which is designed to perform three vital functions:

1. Document storage: Serving as a digital vault, the knowledge warehouse houses an array of unstructured data sources, encompassing documents, messages, support tickets, and more. This reservoir becomes the primary source from which the generative AI system extracts valuable insights.

2. Document search: The knowledge warehouse’s architecture facilitates rapid and precise document retrieval. This functionality enables the generative AI to swiftly access the relevant information required for crafting its outputs.

3. Indexing: By employing indexing mechanisms, the knowledge warehouse organizes and categorizes the stored data. This organizational structure not only accelerates data retrieval but also enhances the generative AI’s comprehension of the stored content.

When it comes to implementing a knowledge warehouse, there are several options available. These include advanced vector databases like Pinecone, Weaviate, and Milvus, known for their ability to manage complex data, as well as open-source search engines like Elasticsearch, which are cost-effective and reliable. The choice ultimately depends on your enterprise’s specific requirements and resources.
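As a rough illustration, here is how those three functions might look with Elasticsearch and its Python client; the index name and sample document are invented for the example:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local cluster

# Document storage and indexing: each unstructured record lands in a
# full-text index as it is ingested ("knowledge" is a hypothetical index).
es.index(index="knowledge", document={
    "source": "support-ticket",  # e.g., document, Slack message, ticket
    "text": "Customer cannot reset their password from the mobile app.",
})

# Document search: fast retrieval of the records the model will draw on.
hits = es.search(index="knowledge",
                 query={"match": {"text": "password reset"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["text"])
```

A vector database such as Pinecone would play the same role, trading keyword matching for semantic similarity search.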

Batch ETL pipeline

The ETL pipeline is the linchpin of a successful knowledge warehouse, as it handles the critical task of populating the repository with data. This process involves extracting data from various sources, transforming it into a structured format, and loading it into the knowledge warehouse so it can be utilized.

For this purpose, Apache Spark is great for its exceptional ability to handle unstructured data. However, to elevate your capabilities further, we recommend harnessing the synergies of Spark and Prophecy’s low-code data engineering platform, which seamlessly integrates with the Databricks data architecture. This combination offers a modern, enterprise-grade solution, capable of quickly building and deploying the ETL pipelines essential to powering your generative AI applications.
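For orientation, a bare-bones PySpark version of such a batch pipeline might look like the sketch below; the S3 paths and field names are assumptions, and a production pipeline (whether hand-written or built visually in Prophecy) would carry far more transformation logic:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("knowledge-warehouse-etl").getOrCreate()

# Extract: read raw unstructured exports (hypothetical bucket and schema).
raw = spark.read.json("s3://acme-raw/support-tickets/")

# Transform: keep the fields the knowledge warehouse needs and clean them.
docs = (raw
        .select("ticket_id", "subject", "body")
        .withColumn("text", F.concat_ws("\n", "subject", "body"))
        .filter(F.length(F.trim("text")) > 0))

# Load: write the cleaned documents out for downstream indexing.
docs.write.mode("overwrite").parquet("s3://acme-curated/knowledge-docs/")
```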

Enterprise data integration

The critical step of integrating enterprise data into your generative AI ecosystem typically requires some level of orchestration. There are two general approaches to this: the app-based approach and the streaming-pipeline approach.

The app-based approach is just what it sounds like: using software applications, often custom-built, to manage the integration process. These apps are designed to streamline the flow of data from disparate sources and enable seamless integration with your model. Data processing and cleaning can be handled within these apps, but that too requires customization.

The streaming-pipeline approach involves using real-time data processing pipelines that continuously stream and process data from a variety of sources. This enables the model to receive and analyze data in near real-time, allowing for dynamic and responsive generation of content.

Although each is effective, the streaming-pipeline approach is highly recommended. This method ensures seamless integration without the need for intensive development: there is a range of existing, ready-made components purpose-built for tasks like data processing and cleaning.
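Sketched with Spark Structured Streaming, a minimal streaming pipeline might look like the following; the Kafka broker, topic, and storage paths are placeholders, and the Kafka source requires the spark-sql-kafka package on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("streaming-integration").getOrCreate()

# Continuously consume new enterprise events (hypothetical broker and topic).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "enterprise-events")
          .load())

# Light cleaning before the events reach the knowledge warehouse.
cleaned = (events
           .selectExpr("CAST(value AS STRING) AS text")
           .filter(F.length(F.trim("text")) > 0))

# Stream the cleaned records into the curated zone in near real-time.
query = (cleaned.writeStream
         .format("parquet")
         .option("path", "s3://acme-curated/stream/")            # placeholder
         .option("checkpointLocation", "s3://acme-chk/stream/")  # placeholder
         .start())
query.awaitTermination()
```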

Considerations for Building Generative AI Apps on Enterprise Data

There are a number of considerations to keep in mind when developing generative AI solutions on top of enterprise data – but each has a viable solution that helps ease the process. 

These often include: 

Cost: Building infrastructure from scratch is a resource- and time-intensive path that requires developers with highly specific skill sets, which can lead to escalated costs.

How to solve: Exploring no-code options presents a cost-effective alternative, reducing the dependency on dedicated developers.

Time: The journey from conceptualization to building, testing, and deploying generative AI applications is inherently time-consuming.

How to solve: By automating various stages of the development cycle, enterprises can expedite the overall process, optimizing time-to-market.

Accuracy: Enterprise data can be complex, requiring thorough validation to support the quality and reliability of any generated insights.

How to solve: Using automated data validation mechanisms can ensure accuracy and efficiency and also lower the burden on development resources.
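As a simple illustration of what automated validation can look like (the checks, field names, and allowed sources below are invented for the example), each record can be screened before it ever reaches the knowledge warehouse:

```python
def validate_record(record: dict) -> list[str]:
    """Return validation errors; an empty list means the record is clean."""
    errors = []
    text = record.get("text", "")
    if not text.strip():
        errors.append("empty text")
    if len(text) > 100_000:
        errors.append("oversized text, likely a bad extract")
    if record.get("source") not in {"document", "slack", "ticket"}:
        errors.append(f"unknown source: {record.get('source')!r}")
    return errors

# Quarantine bad records instead of letting them pollute the warehouse.
record = {"source": "ticket", "text": "Password reset fails on mobile."}
problems = validate_record(record)
print("quarantined" if problems else "accepted", problems)
```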

How to Build an Enterprise AI App with Prophecy

To alleviate the previously described challenges, Prophecy offers a Generative AI platform that is designed to streamline the development of powerful generative AI applications on private enterprise data. Let’s take a look at what it offers.

Source Components: Prophecy provides pre-built source components that facilitate data extraction from diverse sources including documents, databases, and application APIs like Slack. This feature eliminates the complexity of data aggregation, ensuring a seamless flow of information.

Orchestration: Automating the process of populating your knowledge warehouse with current data is made effortless through Prophecy’s orchestration capabilities. This ensures that your generative AI application operates with the most up-to-date and relevant data.

The app production phase encompasses a structured sequence of actions, designed to bring your generative AI application to life with precision and efficiency. This phase neatly divides into two pivotal steps:

Step 1: Unleash ETL on unstructured data

The initial step involves the orchestration of data pipelines, facilitating the seamless movement of private and unstructured data. These pipelines direct data into a vector database or an open-source search engine such as OpenSearch or Elasticsearch, ensuring a structured repository to draw insights from.
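A stripped-down sketch of this step, assuming OpenAI embeddings and a Pinecone index as stand-ins (the index name, credentials, and sample chunk are placeholders), might look like:

```python
from openai import OpenAI
from pinecone import Pinecone

oai = OpenAI()                                           # assumes OPENAI_API_KEY
index = Pinecone(api_key="YOUR_KEY").Index("knowledge")  # hypothetical index

def embed(text: str) -> list[float]:
    """Turn a cleaned document chunk into a vector for similarity search."""
    out = oai.embeddings.create(model="text-embedding-3-small", input=text)
    return out.data[0].embedding

# Load one chunk produced by the ETL pipeline into the vector store.
chunk_id, chunk_text = "ticket-42-0", "Password reset fails on mobile."
index.upsert(vectors=[(chunk_id, embed(chunk_text), {"text": chunk_text})])
```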

Step 2: Construct a streaming ETL pipeline for inference

The second step involves the creation of a streaming ETL pipeline, tailored to drive model inference. This process involves the incorporation of Large Language Models (LLMs), enabling direct responses to end user queries while simultaneously presenting relevant documents sourced from the knowledge warehouse.
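Conceptually, this is retrieval-augmented generation: embed the user’s query, fetch the closest documents from the knowledge warehouse, and hand both to the LLM. A minimal sketch, again using OpenAI and Pinecone as illustrative stand-ins:

```python
from openai import OpenAI
from pinecone import Pinecone

oai = OpenAI()
index = Pinecone(api_key="YOUR_KEY").Index("knowledge")  # hypothetical index

def answer_query(question: str) -> str:
    """Retrieve the closest documents, then let the LLM answer from them."""
    q_vec = oai.embeddings.create(model="text-embedding-3-small",
                                  input=question).data[0].embedding
    hits = index.query(vector=q_vec, top_k=3, include_metadata=True)
    context = "\n---\n".join(m.metadata["text"] for m in hits.matches)
    reply = oai.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(answer_query("Why does password reset fail on mobile?"))
```

In a production streaming pipeline, this function would sit behind the stream consumer, answering each incoming query as it arrives.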

By leveraging the Prophecy Generative AI platform to perform these two steps, your enterprise can ensure the operational excellence of its generative AI application and have confidence that it will generate precise and relevant insights and responses.

Ready to Put Your Knowledge to Work? 

For a more comprehensive understanding of ETL modernization and its impact, we would love to share the following thought leadership piece: Understanding ETL Modernization: Everything you need to know to accelerate ETL modernization with low-code Spark

You can also explore the potential of generative AI by investigating the Prophecy Generative AI platform and see how you can build your own generative AI application on your enterprise data in hours.


Check out the Artificial Intelligence Use Case Master Index to learn about other interesting AI use cases.

About the author 

Radiostud.io Staff

Showcasing and curating a knowledge base of tech use cases from across the web.
