To understand what a rule engine is and why it was designed, we need to go back to the 1970s. In research circles, the notion of object-oriented languages appeared at that time, along with the ambition to model the whole business through object modeling.
In the 1990s, this way of programming spread into information systems through languages such as C++ and later Java and .NET. It soon became apparent that this approach, despite its many advantages over procedural programming, had shortcomings.
With numerous, complex, and frequently changing business rules, object modeling showed real limits: above all, the business code was scattered across all the objects, which made maintenance complex and costly. The object world had an answer to its design problems: design patterns. For business logic, the idea is to gather the rules in one place, which eases evolution and maintenance (the strategy pattern).
But the performance and completeness of the resulting code still depend entirely on the developer's skill.
So a solution had to be found.
In 1979, a young American researcher, Charles Forgy, defended his thesis on an algorithm to represent and execute business rules: the Rete algorithm ("rete" means "network" in Latin).
This algorithm:
1) Lets you describe rules declaratively, in the form "if these facts are true, then perform these actions" (add, modify, or delete a fact). The engine can then infer (reason) over the facts: facts produced by one rule can in turn trigger other rules. Rules are written in a dedicated rule-description language.
2) Defines an execution engine that reads the rules, represents them in a graph called the Rete network, and traverses this graph to determine which rules to execute, based on the input data and the data produced by the rules.
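The match-infer-act loop described above can be sketched in a few lines of plain Java. This is a naive illustration with invented fact and rule names, not the Rete algorithm itself: a real engine avoids re-scanning every rule on every cycle.

```java
import java.util.*;
import java.util.function.*;

// Naive forward-chaining sketch: "if facts are true then add facts".
// All names (orders>10, status=gold, ...) are invented for illustration.
public class NaiveRuleEngine {
    record Rule(String name, Predicate<Set<String>> when, Set<String> then) {}

    // Fire rules until no new fact is produced (a fixed point is reached).
    static Set<String> run(List<Rule> rules, Set<String> facts) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : rules) {
                if (r.when().test(facts) && facts.addAll(r.then())) {
                    changed = true; // a rule inferred new facts: re-evaluate
                }
            }
        }
        return facts;
    }

    static Set<String> demo() {
        List<Rule> rules = List.of(
            new Rule("gold-customer", f -> f.contains("orders>10"), Set.of("status=gold")),
            new Rule("gold-discount", f -> f.contains("status=gold"), Set.of("discount=10%")));
        return run(rules, new HashSet<>(Set.of("orders>10")));
    }

    public static void main(String[] args) {
        // The second rule fires only because the first one inferred status=gold:
        // this chaining is exactly the "inference" the text describes.
        System.out.println(demo());
    }
}
```

Note how "gold-discount" is never triggered by the input data directly; it fires on a fact that another rule produced, which is what distinguishes inference from a simple sequence of if statements.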
What is important to understand is that this algorithm was designed to solve the following problems:
- Decoupling the writing of rules from their execution. End users describe their business rules one after the other, and it is the chaining of these rules that implements the business logic. A tool was therefore needed that could support this way of working.
- Business rules change often. It must therefore be possible to modify rules quickly and to execute them with the changes taken into account.
This approach lets a business analyst describe and implement rules without worrying about scheduling or about the performance of the resulting code, concerns that fell to the programmer in the previous approach.
Many rule engines stop there: the rules are evaluated one after the other, which is very inefficient when there are many of them. Such tools let you organize the rules to mitigate the problem.
But performance still degrades quickly, so the Rete algorithm goes further:
- It indexes the rules at each startup, so every modification is taken into account. The resulting tree is stored in main memory (RAM). Depending on the input data and the data modified by the rules, the algorithm traverses this tree; indexes and hash maps built by the algorithm make it fast to find the potentially executable rules.
This allows the tools implementing this algorithm:
- To deliver performance that remains (almost) constant regardless of the number of rules declared (only the startup time changes).
- To deliver excellent execution performance (over 50,000 rules per second), provided the rules are well written and the data model is business-oriented (the same concern as with relational databases).
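The indexing idea can be illustrated with a minimal sketch, again in plain Java and with invented rule and fact-type names. It mimics only the first stage of the Rete network (finding candidate rules by fact type with a hash lookup), not the full join network:

```java
import java.util.*;

// Minimal sketch of rule indexing: rules are registered once under the
// fact type they match, so an incoming fact finds its candidate rules
// with a single hash lookup instead of a scan over every rule. This is
// why matching cost stays (almost) flat as the rule count grows.
public class RuleIndex {
    private final Map<String, List<String>> rulesByFactType = new HashMap<>();

    void register(String ruleName, String factType) {
        rulesByFactType.computeIfAbsent(factType, k -> new ArrayList<>()).add(ruleName);
    }

    List<String> candidatesFor(String factType) {
        return rulesByFactType.getOrDefault(factType, List.of());
    }

    public static void main(String[] args) {
        RuleIndex index = new RuleIndex();
        index.register("apply-loyalty-points", "Purchase");
        index.register("flag-large-basket", "Purchase");
        index.register("send-birthday-offer", "Customer");
        // A new Purchase fact only needs to consider the two Purchase rules,
        // however many Customer (or other) rules exist.
        System.out.println(index.candidatesFor("Purchase"));
    }
}
```

A real Rete implementation pushes this much further (shared condition nodes, join memories), but the principle is the same: the work done per incoming fact depends on the rules that can match it, not on the total number of rules.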
In our project experience, for example a loyalty system with a basket of 100 products and 10-20 business operations, a few hundred rules fire (depending on the case), and the whole run takes at most 300-400 ms end to end. The exact figure obviously depends on how the rules are written and on the hardware they run on (virtualized environments do not always have very powerful processors).
It is important to understand that a rule engine such as Drools runs entirely in RAM. Thanks to the indexing of the business rules, it makes full use of the processor: there is no waiting on third-party services, as there is when fetching data from a database.
It is important to keep these facts in mind, because choosing a tool such as Drools, even if it implies a change of programming paradigm (from procedural to declarative) and some technical constraints, lets you future-proof your information system by investing in a tool that:
- Delivers very good execution performance regardless (almost) of the number of rules declared.
- Decouples business rules from the rest of the information system.
- Allows rules to be reused by other systems (subject to technical constraints, such as the existence of a REST service for a mainframe).
- Lets business-oriented profiles focus on the business rules; and we know that such profiles stay longer in a company.
- Being open source, keeps you independent of a vendor.
The Drools rule engine appeared around 2003/2004. It is open source and distributed under the Apache License 2.0. This license allows professional use without fear of lawsuits over the code (the business rules) that you build with Drools.
Some caution is warranted, because many open-source tools, such as the NoSQL databases MongoDB and Elasticsearch, have changed their distribution license and made it less open.
Drools is maintained by Red Hat/JBoss (the publisher of the Red Hat Linux distribution), which is now a subsidiary of IBM. So far, this acquisition has only been good for Drools. A large community gravitates around Drools, reports anomalies very quickly, and proposes patches.
Originally, Drools offered only a programming interface (API), and the rules were written in a dedicated language: DRL, the Drools Rule Language. Everything had to be done from developer tools.
Since then, the community has built a web-based, multi-user, multi-project interface that manages the whole life cycle of the rules, from writing them to building a binary executable by the rule engine.
It is a tool for developing and fine-tuning rules, but:
- A highly available deployment of the solution has to be engineered each time.
- There is no tracing tool to see what happens during rule execution (which rules fired, which data was present, and so on); you have to write one yourself each time.
- User rights management is possible but has to be configured.
It is an open-source "Swiss Army knife" that you have to adapt to your needs.
For 14 years, we have worked on Drools projects: training, support, and/or implementation of Drools rules. This has allowed us to help our customers not only write their rules but also integrate them into their technical infrastructure.
From this long project experience, we have developed a platform that lets a Drools project start very quickly with everything included:
- User management
- Storage of execution traces
- Deployment of new rule packages to the execution engines
- Addition of new execution engines that configure themselves
- All of this based on Docker containers and/or Ansible DevOps scripts
Our platform in its current form is in production at two customers, one of which uses OpenShift.
Most of our other customers use parts of the platform, especially the tracing of executed rules.
Our tool therefore offers:
- A load balancer that directs traffic to the right execution engine,
- An administration interface for managing users, projects, and deployments, adding execution engines, and viewing execution traces,
- The community's web interface, which we have taken over and integrated.
By default, on the development workstation, all of this runs in Docker containers.
Our tool is open source under the same license as Drools itself, and the source code is available on GitHub, where we publish all our developments.
The objective is to offer, by mid-2021, a hosted service so that everything can be made available quickly according to the customer's needs.
Our offer will come in several forms:
- An offer for the trainings we provide: we are rewriting our courses to run on our hosted platform with a dedicated graphical interface.
- A shared offer, with each customer's data kept separate from the others.
- A dedicated offer on the main cloud platforms on the market; the customer will run on their own servers or on servers that we provision for them.
The offer is currently being finalized, and we are focusing on the first two points, with a shared offer initially hosted in our (small) data center on our premises.
The goal is to ease access to training and to project launch: users quickly get everything they need to develop and execute their rules, and can reach their execution engine through a secure internet address (URL).
The product as it stands has been developed gradually over the last 5 years.
As for the obstacles to adopting a rule engine, they are numerous, but we will describe one of the biggest problems we currently encounter.
Computing has always been sensitive to trends: client/server, object modeling, multi-tier, microservices, serverless, and so on.
Thanks to open source, everything moves faster and technologies advance very quickly; there was a time when COBOL concepts changed only every 20-30 years.
We focus on implementing complex business rules in back-office or middle-office applications.
Many people think the solution always comes from a good choice of technical architecture; the usual simplification is to pick a "cloud" microservice architecture and assume that scalability will then come effortlessly, with extraordinary results.
We shall skip over the fact that this considerably enriches the companies selling the cloud.
These approaches lead developers to split business processing into multiple calls to a rule engine or to programs implementing business rules.
A simple request to find out which business offers suit a customer, based on their history, can then require 5-15 calls to finer-grained services (or microservices). We are talking about real project feedback here, not theoretical cases. Each of these small services must therefore perform extremely well, and even so, all these calls end up costing a great deal in overall performance.
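A back-of-the-envelope calculation shows why the call count dominates. The figures below are assumptions chosen for illustration, not measurements from our projects:

```java
// Rough cost model: each remote call pays a fixed network/serialization
// overhead on top of the actual business computation. The numbers are
// illustrative assumptions, not benchmarks.
public class CallOverhead {
    static double totalMs(int calls, double perCallOverheadMs, double businessMs) {
        return calls * perCallOverheadMs + businessMs;
    }

    public static void main(String[] args) {
        double overhead = 20.0;  // assumed cost per remote call, in ms
        double business = 50.0;  // assumed total business computation, in ms
        // 15 fine-grained calls vs. one consolidated call:
        System.out.printf("15 calls: %.0f ms%n", totalMs(15, overhead, business)); // 350 ms
        System.out.printf(" 1 call:  %.0f ms%n", totalMs(1, overhead, business));  //  70 ms
    }
}
```

Under these assumed figures, the fixed per-call overhead of the fine-grained design outweighs the business computation itself, which matches what we observe: the architecture, not the rules, becomes the bottleneck.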
The result is that instead of focusing on rich business development, the teams spend their time solving technical problems.
We have not discussed Drools in detail here, but the tool supports component management, which makes it possible to build a single service that answers all of these requests in one call. This does require an appropriate software architecture.
We built prototypes along these lines for these customers and obtained correct performance.
In the 5-15 services example, our single-call implementation performed much better than chaining the 15 services; only an individual unit call showed the same performance.
Splitting the processing into 5-15 services therefore restricts how the system can be used if performance is to remain acceptable.
But for those who need to implement complex back-office rules and expose them to a website or to partners, and who understand that this calls for a different approach based on an efficient algorithm, Drools is an excellent solution.