AI-Assisted Modeling
Introduction
Flowable Design (as of version 2025.2.02+) introduces AI-powered model interaction through a conversational user interface. When generative AI is enabled, a new option appears at the bottom of the property panel. This allows you to build and modify models using natural language, with both text and voice input, rather than through traditional drag-and-drop operations.
Supported Model Types
The AI assistant can work with:
- BPMN
- CMMN
- Agent
- Channel
- Data objects
- DMN
- Event
- Forms
- Services
- Data dictionary
Getting Started
- Enable generative AI in your Flowable Design settings
- Look for the AI interaction option at the bottom of the property panel
- Click to open the chat interface
- Describe what you want to create or modify using text or voice commands
This feature is marked as experimental. Users may encounter issues, particularly when switching between LLM models.
Recommendation: Use this feature as a technology preview for exploration and testing rather than for production deployments. We will continue to improve stability and capabilities in future releases as both the feature and the underlying LLM technology mature.
Model Chat
The chat in a supported model allows you to give direct instructions (e.g., add a subprocess with a script task) or more declarative commands (e.g., add a review and approval step typical for the insurance industry).
The screenshot below shows what this looks like for BPMN:

Scripting Tasks
For BPMN and CMMN script tasks, the AI chat functionality can be used to help with creating scripts. The AI button becomes visible in the top-right corner when hovering over the script area, as shown here:

The chat interface uses a different paradigm here: suggested changes need to be explicitly accepted before they are applied. Note that generated scripts will try to use the built-in flw functions to keep scripts portable:

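As an illustration of what such a generated script could look like, here is a minimal sketch for a BPMN script task in JavaScript. The variable names are made up, and the flw call is an illustrative assumption; consult the Flowable documentation for the actual built-in flw functions:

```javascript
// Minimal sketch of an AI-generated script for a BPMN script task (JavaScript).
// The flw.* call below is an illustrative assumption; see the Flowable
// documentation for the actual built-in flw functions.

// 'execution' is the standard object available in BPMN script tasks.
var amount = execution.getVariable('claimAmount');
var highValue = amount > 10000;

// A portable flw helper would typically be preferred over engine-specific
// utilities, e.g. (hypothetical): flw.log('highValue=' + highValue);

execution.setVariable('highValueClaim', highValue);
```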
Limitations
Currently, only OpenAI LLM models are supported, because the feature requires 'structured outputs' for the LLM responses (for more details, see https://platform.openai.com/docs/guides/structured-outputs). Claude models have recently added this capability in beta, and they will be supported in the future.
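To make the 'structured outputs' requirement concrete, here is a minimal sketch using the OpenAI Node.js SDK, where the response is constrained to a JSON schema. The prompt and schema are simplified illustrations, not what Flowable Design actually sends:

```javascript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// With response_format type "json_schema" and strict mode, the model is
// guaranteed to return JSON that conforms to the given schema.
const completion = await client.chat.completions.create({
  model: "gpt-4.1-mini",
  messages: [
    { role: "user", content: "Add a review and approval step to the process." },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "model_changes",
      strict: true,
      schema: {
        type: "object",
        properties: {
          elements: {
            type: "array",
            items: {
              type: "object",
              properties: {
                type: { type: "string" },
                name: { type: "string" },
              },
              required: ["type", "name"],
              additionalProperties: false,
            },
          },
        },
        required: ["elements"],
        additionalProperties: false,
      },
    },
  },
});

// Valid JSON matching the schema, ready to be turned into model changes.
console.log(JSON.parse(completion.choices[0].message.content));
```

Because the response is guaranteed to parse against the schema, a tool can reliably map LLM output to concrete model changes instead of relying on fragile post-hoc parsing.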
In our experiments, results varied significantly depending on the LLM model used. Here are some of our results:
| Model | Result quality | Speed |
|---|---|---|
| gpt-4.1-mini | ++++ | ++++ |
| gpt-4.1 | ++++ | +++ |
| gpt-4.1-nano | ++ | +++++ |
| gpt-4o | +++ | ++ |
| gpt-5 | ++ | + |
| gpt-5.1 | ++ | + |
| gpt-5-nano | ++ | ++++ |
| gpt-5-mini | ++ | ++++ |
gpt-4.1-mini currently gives the best overall results balanced with acceptable speed (though your own tests might of course yield different results).
gpt-5.1-codex(-mini) are currently not supported, as different APIs are used under the hood compared to the other models.