From Automation to Agency: Orchestrating AI with Flowable
Introduction
Artificial Intelligence (AI) services are increasingly used in everyday business scenarios. In Flowable, these services can be modeled as agent models and are supported as first-class citizens, alongside process, case, and business rules models. For enterprise applications, where consistency, predictability, and auditability are critical, the Flowable platform provides the necessary tools to include AI in a controlled and transparent way. Collectively, these AI features are known under the product name Flowable AI Studio.
Process and case models can be extended with AI functionality, allowing users to define the level of autonomy given to AI services. At one end of the spectrum this means fully automated behavior; at the other, flows where human input is required to review and approve AI-generated suggestions. These patterns are typically implemented using well-known BPMN and CMMN constructs.
At a high level, Flowable supports four primary uses of AI for orchestration, discussed in the following subsections.
High-Level Patterns
Orchestrating AI Services
Flowable’s core capabilities include the orchestration of external services, including AI-based systems. AI services can be integrated into cases and processes using familiar BPMN and CMMN modeling constructs.
This integration is achieved through agent models, which serve as abstractions for AI services. An agent can represent anything from a simple, single-call interaction with a language model to a more complex, externally managed AI system that participates in multiple steps throughout a process or case.
See the Orchestrating AI Services section for more details.
Retrieval Augmented Generation (RAG)
Invoking AI services powered by large language models (LLMs) provides significant capabilities, but these models do not include company-specific knowledge by default. Flowable supports the definition of knowledge bases using dedicated knowledge base models. These models allow organizations to capture, process, and query their own internal content using common AI techniques such as vectorization.
LLM?
LLMs, or Large Language Models, are software systems trained on massive amounts of text data. They learn patterns in language and can generate human-like text, answer questions, summarize documents, translate languages, write code, and more, based on the prompts or input they receive.
From a high-level point of view, they work by predicting the next word in a sentence, using probabilities based on the context of everything that came before.
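The next-word prediction described above can be illustrated with a toy sketch. The candidate tokens and scores below are invented for the example and have nothing to do with a real model; an actual LLM scores tens of thousands of tokens with a neural network, then samples from the resulting distribution.

```python
import math

def softmax(scores):
    """Turn raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

context = "The invoice has been"
candidates = ["approved", "rejected", "banana"]
scores = [2.1, 1.3, -3.0]  # made-up logits: plausible continuations score higher

probs = softmax(scores)
best = candidates[probs.index(max(probs))]
print(best)  # the highest-probability continuation of the context
```

Running this prints `approved`: given the context, the model assigns most of the probability mass to the continuation that fits, which is all "predicting the next word" means at this level of abstraction.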
Vectorization?
Vectorization is the process of converting text into numerical representations (vectors) that capture the meaning and context of the content. This enables efficient similarity searches and matching during AI queries and prompt matching.
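A minimal sketch of the similarity search that vectorization enables. The hand-made three-number vectors below stand in for the high-dimensional embeddings a real embedding model would produce; the document names and query are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend embeddings of two knowledge-base documents.
doc_vectors = {
    "expense policy": [0.9, 0.1, 0.0],
    "holiday calendar": [0.1, 0.8, 0.2],
}

# Pretend embedding of the query "how do I file expenses?".
query_vector = [0.8, 0.2, 0.1]

# The best match is the document whose vector points in the most similar direction.
best = max(doc_vectors, key=lambda d: cosine_similarity(query_vector, doc_vectors[d]))
print(best)  # -> expense policy
```

Because vectors that encode similar meanings end up close together, comparing vectors finds semantically related content even when the query shares no exact keywords with the document.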
When an agent model is linked to a knowledge base, it can use the natural language capabilities of the underlying large language model to interact with business-specific knowledge. This enables more relevant, contextual responses based on internal data, while maintaining the flexibility of generative AI.
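The retrieval-augmented flow described above can be sketched in a few lines. This is illustrative only, with made-up chunk texts and a toy similarity score; in Flowable this wiring is handled declaratively by the knowledge base and agent models, not by hand-written code.

```python
def similarity(a, b):
    # Toy score (dot product); real systems use cosine similarity over embeddings.
    return sum(x * y for x, y in zip(a, b))

# Pretend knowledge-base chunks with their (hand-made) vectors.
chunks = [
    {"text": "Expenses above 500 EUR need manager approval.", "vector": [0.9, 0.1]},
    {"text": "The office closes at 18:00.", "vector": [0.1, 0.9]},
]

def build_prompt(question, question_vector, top_k=1):
    """Retrieve the most relevant chunks and prepend them to the LLM prompt."""
    ranked = sorted(chunks, key=lambda c: similarity(question_vector, c["vector"]),
                    reverse=True)
    context = "\n".join(c["text"] for c in ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who approves large expenses?", [1.0, 0.0])
print(prompt)
```

The language model then answers from the retrieved context rather than from its training data alone, which is how internal knowledge reaches the response without retraining the model.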
See the reference documentation on Knowledge base models for full details.
Embedding an Orchestrator Agent
Beyond integrating individual AI services, Flowable also supports embedding a dedicated orchestrator agent within a case model. This agent acts as an active participant in the case, capable of providing suggestions, initiating actions, and automating steps based on case context.
The orchestrator agent remains active throughout the lifecycle of the case, enabling dynamic, AI-assisted behavior while preserving full control within the model. All core capabilities for orchestrating AI services remain available and applicable within this approach, allowing the agent to both coordinate and interact with other AI components as needed.
Check out the Embedding an Orchestrator Agent section to learn more.
Using Generative AI Features
Flowable Work and Design include built-in generative AI capabilities to assist users with repetitive or manual tasks:
- Assisted Modeling: Quickly generate process or case models—along with application scaffolding—by describing the desired behavior in natural language.
- Document Interaction: Perform actions such as summarizing, analyzing, or querying case and process-related documents using free-form questions.
- Dashboard Generation: Define dashboard content by describing what should be displayed in natural language, allowing for rapid prototyping and iteration.
More information can be found in the Using Generative AI Features section.
Reference Documentation
Use the links above or the sections on the left-hand side to explore each of these topics in more detail. If you're already familiar with the basics, refer to the reference documentation for configuration details and advanced usage.