Streaming Apiman events to external consumers via Kafka
Client
BPL's client in this case study has a leading no-code/low-code business rules and integration platform. Apiman is an integral part of their offering, enabling APIs built on the platform to be exposed with security, metrics, rate limiting, and other classic API Management functionality. The client's platform makes this process as seamless as possible, providing automation and fast paths for the user wherever it can.
Background
Apiman generates a variety of business events that are useful to external systems, where they can drive business and technical processes outside of Apiman's scope.
Within Apiman, these events are used to generate notifications and emails, and will underpin a range of future features.
Externally, Apiman events allow loosely coupled functional composition with other systems. In a typical setup, Apiman's events are pushed into an event-streaming platform such as Apache Kafka, to which other interested systems subscribe.
As a concrete example, Apiman's events for a 'new API signup request requiring approval' are consumed by an external business rules platform that implements a due diligence workflow and invokes Apiman's APIs to either approve or reject the signup request.
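To make the flow concrete, here is a minimal sketch of such a downstream consumer, assuming the plain Kafka Java client and JSON-encoded events; the topic name, event type string, and approval endpoint are illustrative rather than Apiman's actual values:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SignupApprovalConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "due-diligence-workflow");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        HttpClient http = HttpClient.newHttpClient();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic name is illustrative; in practice it depends on the CDC routing setup.
            consumer.subscribe(List.of("apiman-events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    String json = record.value();
                    // A real consumer would parse the CloudEvents 'type' attribute with a
                    // JSON library; a substring check keeps this sketch dependency-free.
                    if (json.contains("\"type\":\"io.apiman.event.contract.approval.request\"")) {
                        // ... run due diligence, then approve or reject via Apiman's
                        // management REST API (the endpoint below is hypothetical).
                        HttpRequest approve = HttpRequest.newBuilder()
                                .uri(URI.create("https://apiman.example.com/approvals"))
                                .header("Content-Type", "application/json")
                                .POST(HttpRequest.BodyPublishers.ofString(json))
                                .build();
                        http.send(approve, HttpResponse.BodyHandlers.ofString());
                    }
                }
            }
        }
    }
}
```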
Some further potential use cases:
- Storing all Apiman events in Elasticsearch for audit and search purposes.
- Approving API signup requests.
- Triggering due diligence workflows.
- Using a centralised notification system.
Requirements
- Retrofit Apiman with an internal event creation and propagation mechanism.
- Automatically propagate Apiman events into one or more external event distribution platforms, such as Kafka.
- Ensure Apiman events contain enough data for external systems to act on them.
- Minimise code churn within Apiman.
- Compatibility with the industry-standard CNCF CloudEvents format.
- Ensure transactionality is maintained within the Apiman application, where possible.
- Avoid the dual-write problem.
- No logic in Apiman specific to any particular event distribution platform.
Project design & implementation
After a full assessment, we architected a solution that balanced various technical and business priorities: well-proven technologies, loose coupling, modularity, upgradability and maintainability, and the functional requirements.
We chose the Transactional Outbox pattern to expose Apiman's events to external consumers in a non-invasive way.
Within Apiman, events are implemented as simple Java classes with standardised metadata following the CNCF CloudEvents specification (subject, timestamps, version, etc.).
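As a minimal sketch of what such an event class might look like (assuming a Java record; the class name, type string, and payload fields are illustrative rather than Apiman's actual code):

```java
import java.net.URI;
import java.time.OffsetDateTime;

/**
 * Illustrative Apiman-style event: a plain Java class whose metadata mirrors
 * the CNCF CloudEvents attributes, plus a schema version for future migration.
 */
public record ContractApprovalRequestEvent(
        // CloudEvents-style metadata
        String id,                 // unique event id
        URI source,                // originating component, e.g. the Apiman Manager
        String type,               // e.g. "io.apiman.event.contract.approval.request"
        String subject,            // what the event is about
        OffsetDateTime time,       // when the event occurred
        long eventVersion,         // schema version, allowing events to evolve
        // Payload: enough context for external systems to act without calling back
        String organisationId,
        String apiId,
        String requesterId) {
}
```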
Events are first dispatched internally, and transactionally, on the Java EE / Jakarta EE CDI events bus. Apiman's own consumers, such as the notifications system, listen for these events to run a variety of business logic.
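A sketch of that dispatch, assuming Jakarta EE CDI and reusing the illustrative ContractApprovalRequestEvent record above; the service and listener names are hypothetical:

```java
import java.net.URI;
import java.time.OffsetDateTime;
import java.util.UUID;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Event;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.event.TransactionPhase;
import jakarta.inject.Inject;
import jakarta.transaction.Transactional;

@ApplicationScoped
public class SignupService {

    @Inject
    Event<ContractApprovalRequestEvent> events; // the CDI events bus

    @Transactional
    public void requestSignup(String orgId, String apiId, String requesterId) {
        // ... persist the signup request as part of this transaction ...
        events.fire(new ContractApprovalRequestEvent(
                UUID.randomUUID().toString(),
                URI.create("/apiman/manager"),
                "io.apiman.event.contract.approval.request",
                apiId,
                OffsetDateTime.now(),
                1L,
                orgId, apiId, requesterId));
    }
}

@ApplicationScoped
class NotificationListener {
    // A transactional observer: only notify once the surrounding
    // transaction has committed successfully.
    void onApprovalRequested(@Observes(during = TransactionPhase.AFTER_SUCCESS)
                             ContractApprovalRequestEvent event) {
        // ... create an in-app notification, send an email, etc. ...
    }
}
```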
To give external consumers access to these events, a special listener transactionally writes every internal Apiman event into an 'outbox' table in Apiman's relational database, providing ACID guarantees and strong consistency internally. Change data capture (CDC) software such as Debezium then reads from the outbox table and reliably writes the events into Kafka (or another external event sink).
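A sketch of how that outbox listener might work, again with hypothetical names; the row layout follows Debezium's outbox event router conventions (id, aggregatetype, aggregateid, type, payload), though Apiman's actual schema may differ:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import jakarta.enterprise.event.TransactionPhase;
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.PersistenceContext;

/** One row per event, in the shape Debezium's outbox event router expects. */
@Entity
class OutboxEvent {
    @Id String id;            // CloudEvents id
    String aggregateType;     // used by Debezium to route to a Kafka topic
    String aggregateId;       // becomes the Kafka message key
    String type;              // CloudEvents type
    String payload;           // the full event, serialised as JSON
}

@ApplicationScoped
class OutboxListener {
    @PersistenceContext
    EntityManager em;

    // IN_PROGRESS: the insert joins the transaction that produced the event, so
    // the state change and the outbox row commit (or roll back) atomically,
    // which is what avoids the dual-write problem.
    void onEvent(@Observes(during = TransactionPhase.IN_PROGRESS)
                 ContractApprovalRequestEvent event) {
        OutboxEvent row = new OutboxEvent();
        row.id = event.id();
        row.aggregateType = "apiman-event";
        row.aggregateId = event.subject();
        row.type = event.type();
        row.payload = toJson(event);
        em.persist(row);
    }

    private String toJson(Object o) {
        // Placeholder: a real implementation would use a JSON mapper such as Jackson.
        return "{}";
    }
}
```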
This solution delegates the complexity of multi-datastore synchronisation to CDC platforms that specialise in the task. Furthermore, it is completely decoupled: no consumer-specific logic is needed in Apiman.
Events are versioned in case they need to be modified or migrated in future.
Outcome
The client plans to integrate Apiman events into their business rules engine, potentially to drive billing and monetisation.
There is plenty of latitude for introducing additional events using the established framework, and it's easy to envision the many powerful integrations this pattern enables.
This work was contributed upstream to Apiman and was released in Apiman 3.0.0.Final.