Project 3

Project Name: 
R.A.I.D. - Robotic Artificial Intelligence Developers
Process Model
Modeling Tool: 
bpmn.io
BPMN 2.0 Model (.bpmn): 
BPMN 2.0 Model (.pdf): 
Effort Process Model: 
12hrs
Workflow Implementation
Execution Tool: 
JIRA
Test Report: 
Our workflow has been designed with automation in mind, leaving only one relatively simple task for the users to perform themselves. The remaining workflow tasks are supported by automation scripts, which help us achieve an efficient process execution. We have prepared several draft design documents, a number of which are chosen at random and presented to the users, whose responsibility it is to verify that the code in the documents is valid; the users thus take on the role of a "reviewer".

The users' first interaction with our workflow implementation is a web form where they can register as a reviewer. This is a simple web application written in JavaScript and hosted on a NodeJS server. The only allowed input is the TU "MatrikelNummer". Both for security and for the users' convenience, the input is validated (client- and server-side) to make sure that it is indeed a "MatrikelNummer" that was entered in the form. In the back end, the Node application parses the user input and generates a student e-mail address from the "MatrikelNummer". Simultaneously, a POST request is sent to the JIRA REST API, assigning a new issue to the user (users are identified by their "MatrikelNummer") and setting the status of the issue to "ToDo" in JIRA. This is important, as it allows us to track the exact time at which a specific user started the reviewing task.

The user then receives an automated e-mail notification informing them that a review task is ready. The e-mail contains a link to another Node web application. In this application, the users are presented with 4 design draft documents, each containing a programming code snippet. The user's role is to mark all code that is valid, leaving any non-working or nonsensical code unmarked. The application then calculates the code quality (represented as the percentage of the code marked as valid).
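The registration step described above could look roughly like the following sketch. The 8-digit "MatrikelNummer" format, the student e-mail pattern, and the JIRA project key and issue type are assumptions for illustration, not taken from the actual implementation.

```javascript
// Server-side validation: accept only a plausible MatrikelNummer.
// (The same check would also run client-side before the form is submitted.)
// Assumption: a MatrikelNummer is exactly 8 digits.
function isValidMatrikelNummer(input) {
  return /^[0-9]{8}$/.test(String(input).trim());
}

// Derive the student e-mail address from the MatrikelNummer.
// The "e<nummer>@student.tuwien.ac.at" pattern is an assumption.
function studentEmail(matrikelNummer) {
  if (!isValidMatrikelNummer(matrikelNummer)) {
    throw new Error('not a valid MatrikelNummer');
  }
  return `e${matrikelNummer}@student.tuwien.ac.at`;
}

// Build the JSON payload for creating the review issue via the JIRA
// REST API (POST /rest/api/2/issue). Project key and issue type name
// are placeholders.
function buildIssuePayload(matrikelNummer) {
  return {
    fields: {
      project: { key: 'RAID' },            // hypothetical project key
      summary: `Review task for ${matrikelNummer}`,
      issuetype: { name: 'Task' },
      assignee: { name: matrikelNummer },  // users identified by MatrikelNummer
    },
  };
}
```

Keeping validation and payload construction as pure functions makes them easy to unit-test independently of the HTTP layer.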
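The quality calculation at the end of the review step could be sketched as follows. The data shape (documents containing snippets with a `markedValid` flag) and both function names are assumptions for illustration.

```javascript
// Quality level: the percentage of code snippets the reviewer marked as
// valid, across all presented draft documents.
function codeQuality(documents) {
  const total = documents.reduce((n, doc) => n + doc.snippets.length, 0);
  const valid = documents.reduce(
    (n, doc) => n + doc.snippets.filter((s) => s.markedValid).length, 0);
  return total === 0 ? 0 : Math.round((valid / total) * 100);
}

// "Lazy reviewer" KPI: did the user touch every document at all?
// Untouched snippets are assumed to have markedValid === undefined.
function reviewedAllDocuments(documents) {
  return documents.every(
    (doc) => doc.snippets.some((s) => s.markedValid !== undefined));
}
```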
The user can then decide to approve the code, ending their task, or, if the quality of the code is too low, reject it and receive a new set of design drafts. Here we measure some additional important KPIs: we track how many times a user rejected the drafts before approving the code, and we can also track whether the user analyzed the code in all the documents or was "lazy" and ignored some draft documents. After the user is done with the reviewing task, another request is sent to the JIRA REST API, setting the status of the user's issue to "DONE". We now have the start and end times of the non-automated task.

The rest of the workflow is fully automated. In the next steps we calculate the quality level of the code, depending on whether the user kept rejecting until the code was 100% bug-free or got tired of reviewing draft documents and submitted low-quality code. In the last step, any code that does not meet certain quality criteria is regarded as non-working, and the process is marked as "failed". If, on the other hand, the code is almost bug-free, the process is marked as a success. This gives us another KPI (number of process completions/failures).

Every component is hosted on a separate VM; the infrastructure consists of multiple Ubuntu 18.04 VMs. One of the VMs has an Apache web server installed with the proxy module enabled and acts as a gateway (reverse proxy) to all Node web applications (registration, task review, API calls) as well as to JIRA. The web applications are written mostly in JavaScript and hosted on several Node servers (one Node server per application).
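The gateway VM described above could be configured along these lines with Apache's mod_proxy. All hostnames, IP addresses, ports, and URL paths below are assumptions, not the actual deployment values.

```apache
# Sketch of the reverse-proxy VirtualHost (requires mod_proxy and
# mod_proxy_http to be enabled, e.g. via "a2enmod proxy proxy_http").
<VirtualHost *:80>
    ServerName raid.example.com

    ProxyPreserveHost On
    # One Node server per application, each on its own VM:
    ProxyPass        /register http://10.0.0.11:3000/
    ProxyPassReverse /register http://10.0.0.11:3000/
    ProxyPass        /review   http://10.0.0.12:3000/
    ProxyPassReverse /review   http://10.0.0.12:3000/
    # JIRA behind the same gateway:
    ProxyPass        /jira     http://10.0.0.13:8080/jira
    ProxyPassReverse /jira     http://10.0.0.13:8080/jira
</VirtualHost>
```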