
Practical DevOps for Big Data/Maritime Operations

From Wikibooks, open books for an open world

Use Case Description


Posidonia Operations is a highly customizable Integrated Port Operation Management System that allows a port to optimize the maritime operational activities related to the flow of vessels in the port service area, integrating all the relevant stakeholders and computer systems.
In technical terms, Posidonia Operations is a real-time, data-intensive platform able to connect to AIS (Automatic Identification System), VTS (Vessel Traffic System) or radar feeds and automatically detect vessel operational events such as port arrival, berthing, unberthing, bunkering operations, tugging, etc.

Posidonia Operations is a commercial software solution that is currently tracking maritime traffic in Spain, Italy, Portugal, Morocco and Tunisia, thus providing service to different port authorities and terminals.

The goals of this case study are adopting a more structured development policy (DevOps), reducing development and deployment costs, and improving the quality of our software development process.

In the use case, the following scenarios are considered: deployment of Posidonia Operations on the cloud under different parameters; support for different vessel traffic intensities; addition of new business rules (high CPU demand); and running simulation scenarios to evaluate performance and quality metrics.

Business goals


Three main business goals have been identified for the Posidonia Operations use case.

  • Lower deployment and operational costs.

Posidonia Operations is offered in two deployment and operational modes: on-premises and on a virtual private cloud. For on-premises deployments, a methodology and tools that ease the deployment process result in a shorter time to production, thus saving costs and resources. For virtual private cloud deployments, the monitoring, analysis and iterative enhancement of our current solution are expected to yield better hardware requirements specifications, which in the end translate into lower operational costs.

  • Lower development costs

Posidonia Operations is defined as a “glocal” solution for maritime operations. By “glocal” we mean that it offers a global solution for maritime traffic processing and analysis that can be configured, customized and integrated according to local requirements. In addition, the solution operates in real time, which makes tasks like testing, integration and releasing more critical. By applying the methodology explained in this book, these tasks are expected to improve, resulting in shorter development lifecycles and lower development costs.

  • Improve the quality of service

Several quality and performance metrics are of interest for the Posidonia Operations use case. Monitoring, predictive analysis and ensuring reliability between successive versions result in an iterative enhancement of the quality of service for our current customers.

Use Case Architecture


Posidonia Operations is an integrated port operations management system. Its mission consists of “glocally” monitoring vessels’ positions in real time to improve and automate port authorities’ operations. The image below shows the general architecture of Posidonia Operations. The architecture is based on independent Java processes that communicate with each other through a middleware layer that provides a Message Queue, a publication and subscription API and a set of topics to exchange data among components.

Figure: Posidonia Operations general architecture

The main components of Posidonia Operations are:

  • Vessels in the service area of a port send AIS messages, which include their location and other metadata, to a central station. (This is out of the scope of the architecture diagram.)
  • An AIS Receiver (a spout) receives those messages and emits them through a streaming channel (usually a TCP connection).
  • The AIS Parser (a bolt) is connected to the streaming channel, parses each AIS message into a middleware topic message and publishes it to a Message Queue.
  • Other components (bolts) subscribe to the Message Queue to receive messages for further processing. For example, the Complex Event Processing engine receives AIS messages in order to detect patterns and emits events to a different Message Queue.
  • The Posidonia Operations web client gives port employees a visual tool to monitor the location of the vessels in real time. This website shows on a map the vessels within the area of influence of a port, together with a list of the operations that are happening.
Figure: Posidonia Operations web client
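The spout/bolt pipeline above can be sketched as a minimal topic-based publish/subscribe flow. This is an illustrative sketch, not Posidonia code: the topic names, the simplified comma-separated message format (real AIS traffic uses NMEA !AIVDM sentences) and the toy "stopped vessel" rule are all assumptions.

```python
from collections import defaultdict

class Middleware:
    """Toy stand-in for the middleware layer: topics with publish/subscribe."""
    def __init__(self):
        self.topics = defaultdict(list)  # topic name -> subscriber callbacks

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.topics[topic]:
            callback(message)

def parse_ais(raw):
    # Hypothetical simplified format "mmsi,lat,lon,speed"; real AIS
    # payloads are 6-bit-encoded NMEA sentences.
    mmsi, lat, lon, speed = raw.split(",")
    return {"mmsi": mmsi, "lat": float(lat), "lon": float(lon), "speed": float(speed)}

mw = Middleware()
events = []

def ais_parser_bolt(raw):
    # Parser bolt: decode the raw message and publish it to a middleware topic.
    mw.publish("ais.positions", parse_ais(raw))

def cep_bolt(msg):
    # Toy CEP-like rule: a vessel reporting under 0.5 knots is "stopped".
    if msg["speed"] < 0.5:
        events.append(("STOPPED", msg["mmsi"]))

mw.subscribe("ais.raw", ais_parser_bolt)       # parser consumes raw AIS
mw.subscribe("ais.positions", cep_bolt)        # CEP consumes parsed positions
mw.publish("ais.raw", "224123456,39.45,-0.32,0.1")
```

In the real system each bolt is an independent Java process and the middleware is a networked message queue rather than in-process callbacks, but the data flow is the same.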

Use Case Scenarios


There are several common scenarios in which the Posidonia Operations development lifecycle can benefit from the knowledge presented in this book. These scenarios are a small subset of the possible ones, but they are representative of interesting situations and are based on our experience delivering a data-intensive application to port authorities and terminals.

Deployment Scenario


Currently Posidonia Operations can be deployed in two fashions:

  • On-premises: The port authority provides its own infrastructure and the platform is deployed on Linux virtual machines.
  • In the cloud: Posidonia Operations is also offered as a SaaS for port terminals. In this case, we use the Amazon Virtual Private Cloud (VPC) to deploy an instance of Posidonia Operations that gives support to different port terminals.

Apart from this, configuration varies depending on the deployment environment:

  • Hardware requirements (number of nodes, CPU, RAM, disk) for deploying Posidonia Operations at each port are based on team experience. For each deployment, the hardware requirements are calculated manually by engineers, considering the estimated number of vessels and the complexity of the rules applied to each message to be analysed. DICE tools can help to tune the appropriate hardware requirements automatically for each deployment.
  • Posidonia Operations deployment and configuration are performed by a system administrator and a developer, and they vary depending on the port authority. Although deployment and configuration are documented, the DICE tools can help to adopt a DevOps approach in which both are modelled, not only so that different stakeholders can better understand the system, but also so that some tasks can be automated.
  • A DevOps approach can also help to provide test and simulation environments that improve our development lifecycle.

Supporting a vessel traffic increase for a given port


The core functionality of Posidonia Operations is based on analysing a real-time stream of messages that represent vessel positions in order to detect and emit events that occur in the real world (a berthing, an anchorage, a bunkering, etc.).
Different factors can make the maritime traffic of a port increase (or decrease), namely:

  • Weather conditions
  • Time of day
  • Season of the year
  • Current port occupancy
  • etc.

This means that the number of messages per second to be analysed is variable and can affect the performance and reliability of the events detected if the system is not able to process the streaming data as it arrives. When it cannot, messages are queued, and this situation has to be avoided.
We currently have tools to increase the speed of the streaming data in order to validate the behaviour of the system in a test environment. However, validating and tuning the system for a traffic increase is a tedious and time-consuming process that DICE tools can help to improve.
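The stream-acceleration tooling mentioned above can be approximated by a replay function that compresses the recorded gaps between messages. This is a sketch under assumptions: the function name and the (timestamp, payload) log format are hypothetical, not the actual tool.

```python
import time

def replay(messages, speedup=1.0, emit=print):
    """Replay (timestamp_seconds, payload) pairs in order, compressing the
    real inter-message gaps by `speedup`. E.g. speedup=60 plays back one
    hour of recorded AIS traffic in one minute."""
    prev_ts = None
    for ts, payload in messages:
        if prev_ts is not None:
            # Sleep the scaled gap; clamp at zero in case of out-of-order logs.
            time.sleep(max(0.0, (ts - prev_ts) / speedup))
        prev_ts = ts
        emit(payload)
```

In a test environment, `emit` would write each payload to the TCP streaming channel the AIS Parser listens on, so the whole pipeline can be exercised at an arbitrary message rate.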

Add new business rules (CEP rules) for different ports


Analysis of the streaming data is done by a Complex Event Processing (CEP) engine. This engine can be considered a “pattern matcher”: for each vessel position that arrives, it evaluates different conditions that, when satisfied, produce an event.
The number of rules (computation) applied to each message can affect the overall performance of the system. In practice, the number and implementation of rules vary from one deployment to another.
DICE tools can help with quality and performance metrics, simulation, predictive analysis, optimization, etc., in order to tune our current solution.
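As an illustration of what such a rule can look like, here is a hedged sketch of a "berthing" pattern: an event fires when a vessel reports several consecutive low-speed positions inside a berth's bounding box. The rule shape, thresholds and message fields are assumptions for illustration, not the actual CEP rule language used by Posidonia Operations.

```python
def make_berthing_rule(berth_box, min_hits=3, max_speed=0.5):
    """berth_box = (lat_min, lat_max, lon_min, lon_max). Returns a stateful
    rule: feed it position messages; it emits a BERTHING event once a vessel
    has reported `min_hits` consecutive slow positions inside the box."""
    hits = {}  # mmsi -> consecutive qualifying messages

    def rule(msg):
        inside = (berth_box[0] <= msg["lat"] <= berth_box[1]
                  and berth_box[2] <= msg["lon"] <= berth_box[3])
        if inside and msg["speed"] <= max_speed:
            hits[msg["mmsi"]] = hits.get(msg["mmsi"], 0) + 1
            if hits[msg["mmsi"]] == min_hits:
                return ("BERTHING", msg["mmsi"])
        else:
            hits[msg["mmsi"]] = 0  # pattern broken: reset the counter
        return None

    return rule
```

Because every incoming position is evaluated against every active rule, adding rules multiplies the per-message computation, which is exactly why rule count affects overall throughput.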

Giving support to another port in the cloud instance of Posidonia Operations


Giving support to another port (or terminal) in the cloud instance of Posidonia Operations usually means:

  • An increase in streaming speed (more messages per second)
  • An increase in computation (more CEP rules executed per second)
  • Deployment and configuration of new artefacts and/or nodes

In this case, DICE tools can also help Posidonia Operations by estimating the monetary cost of introducing a new port into the cloud instance.
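A first-order estimate of that monetary cost can be sketched as simple arithmetic over the extra capacity a new port requires. The flat per-vCPU-hour pricing below is a deliberate simplification: real cloud billing also covers storage, traffic and reserved-versus-on-demand discounts, and the figures are hypothetical.

```python
def added_port_monthly_cost(extra_nodes, vcpus_per_node, price_per_vcpu_hour,
                            hours_per_month=730):
    """Rough monthly cost of the extra nodes needed for one more port.
    Hypothetical flat on-demand pricing; no storage or traffic costs."""
    return extra_nodes * vcpus_per_node * price_per_vcpu_hour * hours_per_month
```

For example, two extra 4-vCPU nodes at $0.05 per vCPU-hour come to roughly $292 per month; tools that predict the required capacity make this estimate usable before committing to the deployment.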

Running a simulation to validate performance and quality metrics across versions


CEP rules (business rules) evolve from one version of Posidonia Operations to the next, which means that the performance and quality of the overall solution can vary between versions. Some examples of validations we currently perform (manually):

  • Performance: a new version of the CEP rules does not introduce a performance penalty on the system
  • Performance: a new version of the CEP rules does not produce queues
  • Reliability: a new version of the CEP rules provides the same output as the prior version (both detect the same events)

One of the main issues of the current situation is that measuring performance (system performance and the quality of the data provided by the application) is done manually, and it is very costly to obtain an objective quantification. By using the DICE simulation tools, performance and reliability metrics can be predicted for different environment configurations, thus ensuring high-quality versions and non-regression.
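The reliability validation, checking that two versions of the rules detect the same events, can be sketched as a set comparison of event logs. The (event_type, vessel, timestamp) tuple format is an assumption for illustration.

```python
def compare_event_logs(baseline, candidate):
    """Compare event logs from two rule versions. Events are hashable tuples,
    e.g. (event_type, mmsi, timestamp). Returns what the candidate version
    missed, what it spuriously added, and how many events both agree on."""
    base, cand = set(baseline), set(candidate)
    return {
        "missed": sorted(base - cand),    # events the new version failed to detect
        "spurious": sorted(cand - base),  # events only the new version emits
        "matching": len(base & cand),
    }
```

Running both rule versions against the same recorded AIS stream and requiring `missed` and `spurious` to be empty gives an objective, automatable non-regression check.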

DICE Tools


The DICE Framework is composed of several tools (DICE Tools): the DICE IDE, the DICE/UML profile, the Deployment Design tool (DICER) and the Deployment Service provide the minimal toolkit to create and release a DICE application. To validate the quality of the application, the framework encompasses tools covering a broad range of activities such as simulation, optimization, verification, monitoring, anomaly detection, trace checking, iterative enhancement, quality testing, configuration optimization, fault injection, and repository management. Some of the tools are design-focused, others are runtime-oriented; some have both design and runtime aspects and are used in several stages of the development lifecycle.

Some of the DICE Tools have been used during the use case in order to achieve the business goals set for it. The following summarizes the tools used and the benefits obtained from their use.

  • DICE IDE: The DICE IDE integrates all the tools of the proposed platform and gives support to the DICE methodology. The IDE is an Eclipse-based integrated development environment for model-driven engineering, in which a designer can create models describing data-intensive applications and their underpinning technology stack. The DICE IDE integrates the execution of the different tools in order to minimize learning curves and simplify adoption.
  • DICER: The DICER tool allows us to generate the equivalent TOSCA blueprint (deployment recipe) from the Posidonia use case DDSM created using the DICE IDE. This blueprint is used by the Deployment Service to automatically deploy the Posidonia use case. With this tool, different blueprints can be obtained for different configurations of the use case.
  • Deployment Service: Deploying Posidonia Operations manually is quite time- and cost-consuming. The Deployment Service is able to deploy a configuration of the Posidonia Operations use case on the cloud in a few minutes. The latest version of the Deployment Service works on the Flexiant Cloud Orchestrator and Amazon AWS. To deploy a solution using the Deployment Service, its TOSCA blueprint must be provided as input; this blueprint is obtained using the DICER tool.

Using the Deployment Service, deployment is faster (only a few minutes), different configurations of the solution can be deployed, and no expert system administrators are required because the deployment is automated.

  • Monitoring Platform: This tool allows us to obtain metrics in real time for a running instance of Posidonia Operations: generic hardware performance metrics (CPU and memory consumption, disk usage, etc.) and use-case-specific metrics such as the number of events detected, the location of the events, the execution time per rule, messages per second, etc.

The results reported by the Monitoring Platform allow the architect and developers to update the DDSM and/or DPIM models to achieve better performance. Another interesting point is that the Monitoring Platform facilitates integration with the Anomaly Detection Tool and the Trace Checking Tool, because both tools use the information stored in the monitoring platform as a data input.
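A metric such as messages per second can be derived from raw arrival timestamps by bucketing them into one-second windows. This is a sketch of the idea, not the Monitoring Platform's actual implementation; it assumes timestamps in seconds and a non-empty input.

```python
from collections import Counter

def throughput_per_second(timestamps):
    """Given message arrival times in seconds (floats), bucket them into
    1-second windows and return (mean messages/s, peak messages/s).
    Assumes at least one timestamp."""
    counts = Counter(int(ts) for ts in timestamps)
    total = sum(counts.values())
    span = max(counts) - min(counts) + 1  # seconds covered, inclusive
    return total / span, max(counts.values())
```

The peak figure is the one that matters for sizing: the system must keep up with the busiest second, not the average, or messages start to queue.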

  • Fault Injection: The DICE Fault Injection Tool (FIT) is used to generate faults within virtual machines. In the Posidonia use case, this tool is useful to check how the CEP component behaves under a high CPU load. To observe the behaviour of the system, we use the Monitoring Platform, which contains a specific visualization for the system load.

Although the Fault Injection Tool can launch other types of faults, for the Posidonia use case only the high-CPU fault is relevant to evaluate the response of the system. It is important to validate that the system keeps working and that no event is lost when a high load occurs.

  • Anomaly Detection: The Anomaly Detection Tool allows us to validate that the system works as expected with the current rules and with the addition of new ones; that is, no events are lost, no false positives are given, the execution time is kept within a reasonable range, and the order of the events detected is adequate.

  • Simulation Tool: A vessel traffic increase for a given port directly affects the performance of Posidonia Operations if the system is not properly sized. We have worked with the Simulation Tool and the Monitoring Platform to dimension the system and to monitor it in real time. In recent months, a significant effort has been made to improve the quality and the validation of the use case.

Conclusion


The DICE methodology and the DICE framework have proven very useful in the Maritime Operations use case, providing a productivity gain when developing it. The results obtained from applying the DICE Tools to the use case can be summarized as follows:

  • Assessment of the impact on performance after changes in software or conditions. We can predict at design time the impact of changes in the software (number of rules, number of CEPs) and/or conditions (input message rate, with the Simulation Tool; CPU overloads, with the Fault Injection Tool). Moreover, bottlenecks and anomalies can be detected using the Anomaly Detection Tool.
  • Increased quality of the system. We can detect punctual performance problems with the Anomaly Detection Tool, and we can detect errors in the CEP component: missed events, false positives, and delays between the detection of an event and the real time at which it happened.
  • Automatic extraction of relevant KPIs. Easy computation of application execution metrics (generic hardware metrics such as CPU, memory consumption, disk access, etc.) and of application-specific metrics: the computational cost of rules, the number of messages processed per second, the location of events on a map, and the quantification of application performance in terms of the percentage of port events correctly detected by the CEP(s).
  • Automation of deployment. Much faster deployment and the possibility of deploying to different cloud providers.