Optimisation of hybrid energy systems for maritime vessels

The decarbonisation agenda in maritime transport requires that asset owners and operators adopt greener technologies within their existing and new vessels. The primary drivers within this agenda relate to improved environmental metrics, efficient energy performance, and improved asset management. However, the integration of new technologies always presents technical and financial risks. Here, utilising energy and environmental monitoring from real vessels, the authors propose an energy system optimisation architecture, the hybrid fusion energy management system (HyFES), that optimises the key performance indicators of energy performance, reduction of diesel engine nitrogen oxide (NOx) and particulate matter (PM) emissions, and prognostic state-of-health assessment of energy storage technologies. Using state-of-the-art machine-learning techniques, the authors are able to determine the on-board lithium-ion and lead-acid batteries' state of health to within 8 and 4%, respectively. Depending on the mode of operation, optimisation of energy performance indicates fuel savings of between 70 and 80% for the vessel operator. Future research will focus on the integration of more assets into the optimisation architecture and on additional vessel journey use cases.


Introduction
Accounting for 10% of global marine exports, the UK is the fourth-largest shipbuilder in Europe. The sector employs nearly 90,000 people and adds $19 billion per year to GDP [1]. In recent years, competitive global markets and increasingly stringent environmental regulations have called for changes in the maritime sector; for example, the UK Marine Industries Alliance (MIA) identified that it is crucial for the maritime industry to develop energy-efficient, environmentally friendly, and cost-effective vessels [2]. It is worth noting, however, that the pursuit of increased fuel efficiency can be traced back to the 1980s. One method of achieving this objective is to reduce the fuel used directly for propulsion by instead implementing a system that simultaneously utilises fuel and electricity to provide power. For example, in 1986, a hybrid diesel-electric power system was introduced on the HMS Queen Elizabeth II. Since then, hybrid vessels have been increasingly deployed in the marine sector [3, 4]. Typically, a hybrid vessel is powered by a diesel engine and a lithium-ion battery, accompanied by a lead-acid battery as a stand-by power source. Compared to fuel-only vessels, a hybrid power system can offer significant savings in emissions output, fuel usage, and hence fuel costs, while also reducing the through-life maintenance costs of the power system [4].
Diesel engines fitted to marine vessels often have an average life extending beyond 100,000 h. This lifespan can be affected by factors such as internal friction and the intensity of usage [5]. Although regular inspections can aid trouble-shooting and be used to identify wear of specific components, opening the assemblies of crucial parts of the engine may itself increase wear and shorten engine life [5]. Mechanisms that accurately predict the lifecycles of diesel engines can therefore help the asset owner to identify appropriate times for inspection and to plan necessary maintenance, reducing the possibility of damaging engine assets through regular but unnecessary human interventions.
By contrast, storage technologies such as lithium-ion batteries have shorter life cycles. Although the battery is rechargeable, cell aging caused by irreversible chemical reactions during usage leads to a reduction in capacity over time [6]. Hence, timely replacement of the lithium-ion battery is required to maintain operational safety. Lead-acid batteries are similarly subject to capacity degradation: even when the battery is not actively used to provide power, aging may still occur and endanger both the battery and the vessel itself [6]. Because of these asset characteristics, the hybrid power system and its constituent assets require improved health management that can accurately predict asset health status, prevent significant failures, and hence increase vessel reliability and reduce through-life maintenance costs.
Research to date in predictive, intelligent asset management has predominantly focused on single types of critical asset [5]. Although this yields important insights into individual asset lifecycles and health management, the single-asset approach has limited implications for maintaining more complex entities such as the integrated propulsion system on a hybrid vessel. Such a hybrid propulsion system is typically composed of several distinct power assets: a diesel engine, a battery system used for propulsion (typically Li-ion), and a battery system used to provide emergency back-up/hotel services (typically lead-acid). Each of these is engineered differently and features a distinct mechanical or electrical structure. Our paper presents a holistic optimisation strategy that encompasses three key performance indicators (KPIs): (i) efficient energy performance, (ii) reduced environmental outputs, and (iii) predictive asset lifetime assessment. The analysis uses energy performance and environmental data from an MTU 10V 2000 M72 diesel engine within a Thames Clippers vessel. The structure of this paper is as follows: Section 2 presents the case study and describes the methodology for energy performance optimisation. Section 3 presents the proposed architecture for the hybrid fusion energy management system (HyFES) and the methodology for system analysis. Next, Section 4 describes the machine-learning methods used to predict Li-ion and lead-acid battery state of health. Finally, Section 5 presents the experimental results, while Section 6 summarises the main findings.

Case study: Cyclone Clipper
In major cities such as London, there is increasing evidence to suggest that particulate matter (PM), and to some extent nitrogen oxide (NOx), concentrations have not decreased in line with expectations across many locations. Hence, in an attempt not only to prevent an increase in emissions but also to reduce them, zero-carbon-zone policies are being extended to include vessels on major rivers. The vessel studied is the MBNA Thames Clippers vessel Cyclone Clipper. The specification of the clipper's engine is outlined in Table 1.
The data were collected from operational runs of the Thames Clippers vessel Cyclone Clipper, fitted with the MTU 10V 2000 M72 diesel engine summarised in Table 1. The vessel operates between East and Central London, and data were gathered from two test runs.
Diesel engines are known to emit significant quantities of pollutants, particularly NOx and PM. Hence, to fully realise the potential of a hybrid power train, the power management function must be carefully designed so that it minimises both fuel consumption and emissions [4]. However, minimising fuel does not necessarily mean that emissions are also minimised. In the case of NOx, for example, the higher the combustion temperature, the higher the NOx emission, and combustion temperature is not directly correlated with fuel consumption.
Employing a hybrid power configuration with an electric transmission allows for the removal of the gearbox, implying that the engine will operate as a generator. Hence, to achieve minimum fuel consumption, the engine must operate along its most efficient envelope curve. The mathematical model of diesel engine performance is based on a least-squares fit of a polynomial expression and is summarised in Fig. 1 as the red and blue curves, which illustrate power versus engine speed and torque versus engine speed. Each parameter is described by a second-order polynomial of the form

y = Ax^2 + Bx + C

where x is the engine speed, y the modelled parameter (power or torque), and A, B, and C are the polynomial regression coefficients determined using MATLAB.
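As an illustration, such a second-order fit can be reproduced with an ordinary least-squares polynomial fit. The sketch below uses synthetic speed-power data generated from a hypothetical quadratic trend; the coefficients are placeholders, not the vessel's actual values from the performance diagram:

```python
import numpy as np

# Hypothetical quadratic trend standing in for the measured power curve;
# the real coefficients would come from the engine performance diagram.
speed = np.linspace(600.0, 2250.0, 25)            # engine speed x [rpm]
power = 1.6e-4 * speed**2 + 0.05 * speed + 20.0   # parameter y [kW]

# Least-squares fit of the second-order polynomial y = A*x^2 + B*x + C
A, B, C = np.polyfit(speed, power, deg=2)

# The fitted curve can then be evaluated at any engine speed
power_at_1500 = np.polyval([A, B, C], 1500.0)
```

Because the synthetic data are exactly quadratic, the fit recovers the generating trend; with real measurements the residuals would indicate the quality of the second-order model.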
To develop the performance monitoring and advisory system that indicates where the engine should operate along the torque-speed/power-speed curve, the fuel consumption must be mapped as a function of both power and speed. Consequently, the engine fuel consumption has been estimated using a third-order polynomial involving nine coefficients, calculated from 25 engine data points derived from the Performance Diagram document no. XZ51200100002 Rev: E. The function estimates the fuel consumption for various torque-speed combinations and also allows the locus of points of minimum fuel consumption at a given power to be calculated in a straightforward manner.
A polynomial form consistent with the nine stated coefficients is

z = p00 + p10 x + p01 y + p20 x^2 + p11 x y + p02 y^2 + p21 x^2 y + p12 x y^2 + p03 y^3

where z represents fuel consumption in g/kWh, x the engine speed in rpm, and y the power in kW. Based on this equation, the contour map in Fig. 2 has been plotted along with the 3D distribution of the function in Fig. 3.
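A fuel-consumption surface of this form can be evaluated and searched for the locus of minimum-fuel operating points. In the sketch below, the nine coefficients are placeholders chosen for illustration only, not the fitted values from the performance diagram:

```python
import numpy as np

# Placeholder coefficients for a nine-term fuel surface z(x, y)
# (x: engine speed [rpm], y: power [kW], z: fuel [g/kWh]).
p00, p10, p01 = 320.0, -0.12, -0.20
p20, p11, p02 = 4.0e-5, 6.0e-5, 2.5e-4
p21, p12, p03 = -1.0e-8, -3.0e-8, 5.0e-11

def fuel(x, y):
    """Evaluate the third-order fuel-consumption polynomial z(x, y)."""
    return (p00 + p10 * x + p01 * y + p20 * x**2 + p11 * x * y
            + p02 * y**2 + p21 * x**2 * y + p12 * x * y**2 + p03 * y**3)

# Locus of minimum fuel consumption: at each fixed power level, grid-search
# the engine speed that minimises z (cf. the red curve in Fig. 2).
speeds = np.linspace(600.0, 2250.0, 331)
locus = [(pw, speeds[np.argmin(fuel(speeds, pw))])
         for pw in np.linspace(100.0, 900.0, 9)]
```

The grid search mirrors how the optimum working points at each power level trace out a single curve on the contour map.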
At a constant power level, such as in generator-mode operation, if the engine speed and torque take values at which fuel consumption is minimised, the engine is said to be operating at its optimum working point. Such a point corresponds to a trough on the contour map in Fig. 2. As the engine power varies from 0 to 100%, corresponding to a power from 100 kW (power at idle speed) to 900 kW (maximum power, achieved at 2250 rpm), the optimum working points form a single line, illustrated as the red curve in Fig. 2. Consequently, the engine must operate along the red curve in order to achieve minimum fuel consumption. The NOx emissions have been calculated using the polynomial developed in [7] and are shown in Fig. 4.
It is clear from Fig. 4 that NOx emission is not fully dependent on power; in fact, a greater influence on NOx emissions is the exhaust temperature: the lower the exhaust temperature, the lower the NOx emissions. In turn, exhaust temperature depends on the operational time at a particular power (engine torque and speed) setting. The optimisation problem therefore becomes a complicated algorithm that also has repercussions for the remaining useful life of the battery.
The design of the power management function has been addressed in various papers in the literature and can be roughly classified into three main approaches: (1) intelligent control techniques, using control rules/fuzzy logic/neural networks for estimation and control algorithm development; (2) static (steady-state) optimisation methods, where the electric power is translated into an equivalent amount of fossil-fuel power and the optimisation scheme then determines the proper split between the two energy sources using a steady-state efficiency map; and (3) dynamic optimisation approaches, where the optimisation is carried out with respect to a time horizon rather than for an instant in time, i.e. the system attempts to predict the required power; however, such an approach is computationally intensive [8]. Here, the static optimisation method has been employed, both for its point-wise optimisation nature, which fits the test-run data, and for its potential to minimise fuel consumption and emissions simultaneously.

Hybrid fusion energy management system (HyFES)
This section explains the approach taken to the problem posed by the integration of a hybrid fusion energy system. The power was chosen as the common fixed parameter in the system: the power requirements will not change greatly if the current system is replaced with a hybrid system, as the vessel will still have essentially the same hull shape, mass, and air resistance, so a given run will require the same total energy to physically move the vessel. The layout of the powertrain is assumed to take the form shown in Fig. 5.
Assumptions within this energy analysis model include: (1) the power output latency of both the engine and the battery is not accounted for in the model; it is assumed that any level of power demand can be provided instantaneously from either source; (2) given the rated power output of the battery, as obtained from data sheets, the battery can provide the maximum observed power demand (750 kW), so no maximum power constraints were placed upon the battery. If the battery specification changes, further battery constraints may be needed to keep the battery within its rated working limits. The HyFES optimisation model is illustrated in Fig. 6, demonstrating how environmental metrics, energy performance, and asset health (battery remaining useful life) can be integrated.

State of health predictions for battery storage technologies
This section introduces the machine-learning methods applied to the Li-ion and lead-acid battery analyses. For the Li-ion battery, we apply Li-ion lifecycle data to a supervised learning algorithm, the relevance vector machine (RVM) [9]. Representing a Bayesian treatment of the support vector machine (SVM), the RVM method allows the use of arbitrary kernel functions and generates probabilistic predictions. This section discusses the underlying principles of the RVM method.
Suppose there is a training dataset with input vectors {x_n}_{n=1}^N and targets {t_n}_{n=1}^N; the targets can represent either real values or classification labels. As the following sections use the RVM for battery prognostics, the targets t_n are considered to be real values.
In supervised learning, a model y(x) is often assumed in order to make predictions, with y(x) a sum of M linearly weighted basis functions. Formally, the model is defined as

y(x; w) = Σ_{i=1}^{M} w_i φ_i(x) = w^T φ(x)

where φ(x) = (φ_1(x), φ_2(x), ..., φ_M(x))^T is the vector of basis functions evaluated at the input x, and w = (w_1, w_2, ..., w_M)^T is the vector of adjustable weights, one per basis function. For the SVM, the basis functions are kernels centred on the training inputs, so the model can be re-written as

y(x; w) = Σ_{n=1}^{N} w_n K(x, x_n) + w_0

Kernel functions are also used in the RVM. In the RVM, we assume the target values {t_n}_{n=1}^N are noisy samples of the above y(x):

t_n = y(x_n; w) + ε_n

where ε_n is an additive noise term assumed to follow a zero-mean Gaussian distribution with variance σ^2. If the t_n are independent, the likelihood of the entire training dataset is

p(t | w, σ^2) = (2πσ^2)^{-N/2} exp(-‖t - Φw‖^2 / (2σ^2))

where t = (t_1, ..., t_N)^T and Φ is the N × (N + 1) design matrix with rows (1, K(x_n, x_1), ..., K(x_n, x_N)). Maximum-likelihood estimation from this expression may lead to over-fitting because the number of parameters equals the number of training examples. The Bayesian treatment in the RVM resolves this issue: the RVM imposes a zero-mean Gaussian prior distribution on w as a constraint,

p(w | α) = Π_{i=0}^{N} N(w_i | 0, α_i^{-1})
Here α is a vector of N + 1 hyperparameters, one independently associated with each weight w_i. The hyperpriors over α and over the noise precision β ≡ σ^{-2} are defined as Gamma distributions of the following form:

p(α) = Π_{i=0}^{N} Gamma(α_i | a, b),  p(β) = Gamma(β | c, d)

For these priors to be non-informative, a, b, c, and d are set to zero. The hyperpriors then become scale-invariant, so the predictions are unaffected by linear scaling of either t or φ(x). To obtain sparsely distributed weight parameters w, the RVM uses Bayes' rule to derive the posterior conditional probability distribution for w:

p(w | t, α, σ^2) = N(w | μ, Σ)

where Σ = (σ^{-2} Φ^T Φ + A)^{-1} is the posterior covariance, μ = σ^{-2} Σ Φ^T t the posterior mean, and A = diag(α_0, α_1, ..., α_N).
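As a quick numerical check, the posterior quantities can be computed directly. The sketch below assumes the standard RVM forms Sigma = (Phi^T Phi / sigma^2 + A)^{-1} and mu = Sigma Phi^T t / sigma^2, with a toy design matrix and arbitrary hyperparameter values in place of the kernel basis and learned α:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy design matrix and targets (5 samples, 3 basis functions).
Phi = rng.standard_normal((5, 3))
t = rng.standard_normal(5)

alpha = np.array([1.0, 1.0, 1.0])   # one hyperparameter per weight (assumed values)
sigma2 = 0.25                        # noise variance (assumed value)

# Posterior covariance and mean of the Gaussian posterior over w.
Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
mu = Sigma @ Phi.T @ t / sigma2
```

Large α_i shrink the corresponding posterior weight towards zero, which is the mechanism that produces sparsity in the RVM.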
Since a typical RVM training procedure must compute and optimise the hyperparameters α, as the training dataset grows the values of α may tend to infinity, implying that the matrix Σ does not have an inverse and making the relevance vectors impossible to derive; computational efficiency is also compromised. Therefore, this paper employs an iterative expectation-maximisation (EM) algorithm for RVM training.
The EM algorithm avoids the step of directly optimising the hyperparameters. It uses the sparse Bayesian treatment and follows the steps below:
(i) Initialisation: initialise the weights w^(1) and the noise variance (σ^2)^(1).
(ii) Expectation step: use w^k and (σ^2)^k from iteration k to estimate w^{k+1} and E(ww^T), where the covariance is C_k = Φ S_k Φ^T + (σ^2)^k I and S_k = diag((w_0^k)^2, (w_1^k)^2, ..., (w_N^k)^2), with w^k and w^{k+1} the weights at iterations k and k + 1.
(iii) Maximisation step: use w^{k+1} obtained in the expectation step to update the noise variance (σ^2)^{k+1}; this update involves the residual ‖t - Φw^{k+1}‖^2 and the trace of the posterior covariance matrix.
(iv) Convergence check: we set an empirical threshold δ, normally a very small number; the iteration terminates if ‖w^{k+1} - w^k‖ < δ, otherwise return to the expectation step and start a new iteration.
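A minimal numerical sketch of this iterative scheme is given below, using a Gaussian kernel basis and synthetic data in place of the battery capacity measurements. It follows the covariance form C = Φ S_k Φ^T + σ^2 I stated above and is an illustration of the iteration, not the authors' exact implementation (in particular, the noise-variance update is simplified to the mean squared residual):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression task standing in for the capacity-fade data.
x = np.linspace(0.0, 1.0, 40)
t = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

# Design matrix: bias column plus a Gaussian kernel centred on each input.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
Phi = np.hstack([np.ones((x.size, 1)), K])
N, M = Phi.shape

# EM-style iteration: S_k = diag((w^k)^2) plays the role of the prior
# covariance; iterate until the change in the weights falls below delta.
w = np.full(M, 0.1)
sigma2 = 0.1
delta = 1e-4
for _ in range(200):
    S = np.diag(w ** 2)
    C = Phi @ S @ Phi.T + sigma2 * np.eye(N)
    w_new = S @ Phi.T @ np.linalg.solve(C, t)               # expectation step
    sigma2 = float(np.mean((t - Phi @ w_new) ** 2)) + 1e-8  # maximisation step
    if np.linalg.norm(w_new - w) < delta:
        w = w_new
        break
    w = w_new

prediction = Phi @ w
```

Squaring the current weights in S_k drives many of them towards zero across iterations, which is what yields the sparse set of relevance vectors.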
The battery data used in this experiment are obtained from the open-source life-cycle test data repository of the National Aeronautics and Space Administration (NASA) Ames Prognostics Centre of Excellence (PCoE) [10]. To measure the RUL prediction error of the algorithm, we define the absolute error AE and relative error RE as

AE = |R - R̂|,  RE = (AE / R) × 100%

where R is the actual RUL value and R̂ the predicted RUL value. First, we implement RUL estimation for battery No. 5 in this dataset, with three different starting points: the 40th, 60th, and 80th cycles. The RUL prediction results are shown in Table 2.
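Under these definitions (with RE expressed as a percentage of the actual RUL), the error computation is straightforward; the cycle numbers below are made up for illustration:

```python
def rul_errors(r_actual, r_predicted):
    """Absolute error AE and relative error RE (%) for an RUL prediction."""
    ae = abs(r_actual - r_predicted)
    re = 100.0 * ae / r_actual
    return ae, re

# e.g. a hypothetical actual RUL of 44 cycles predicted as 40 cycles
ae, re = rul_errors(44, 40)
```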
From Table 2, both the absolute and relative predicted RUL errors are <10 cycles at the different starting points for battery No. 5, and all the actual RUL values lie within the confidence intervals (CIs). Fig. 7 plots the real data and the predictive point estimates for battery No. 5 starting at the 80th cycle.

Lead-acid batteries are also prone to capacity degradation: even in stand-by mode, when not actively used to provide power, natural aging may still occur, threatening the health of the battery and of the vessel [6]. The current state of the art in lead-acid battery monitoring is 'Ohmic' testing, which ascertains the internal resistance and impedance levels. In this research, we use a 'DC' pulse test, which draws a single short-duration pulse of constant current from the battery cell under test. The voltage response of the cell is measured at the cell terminals, and the voltage and current are recorded. The aim is to estimate the actual state of charge (SOC) of the battery from the voltage-drop data. The following parameters are required during the DC pulse test: full voltage, zero-centred voltage, charge current, and load current. In each test, we started from 100% capacity and repeated the DC pulse tests after discharging 5% of the rated capacity each time, until 40% of the capacity had been discharged. We repeated this experiment 46 times, giving a sufficient amount of data for training. In each DC pulse, we selected eight data points representing the voltage-change curve.
The k-nearest-neighbour algorithm (KNN) is a non-parametric method used for classification and regression [11]. KNN is a type of instance-based, or 'lazy', learning in which the function is only approximated locally and all computation is deferred until classification. For both classification and regression, it can be useful to weight the contributions of the neighbours so that nearer neighbours contribute more to the average than more distant ones.
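A minimal distance-weighted KNN classifier of the kind described can be sketched as follows; the feature vectors and labels are toy stand-ins for the eight-point pulse-test vectors, not the experimental data:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Distance-weighted k-nearest-neighbour classification.

    Nearer neighbours receive larger weights (inverse distance), so they
    contribute more to the vote than distant ones.
    """
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + 1e-12)  # avoid division by zero
    votes = {}
    for idx, wt in zip(nearest, weights):
        votes[train_y[idx]] = votes.get(train_y[idx], 0.0) + wt
    return max(votes, key=votes.get)

# Toy example: two clusters of (hypothetical) pulse-test feature vectors
X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],
              [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
y = ["high_soc", "high_soc", "high_soc", "low_soc", "low_soc", "low_soc"]
label = knn_predict(X, y, np.array([0.05, 0.05]))
```

Being a lazy learner, all the work happens at query time; training amounts to storing the labelled vectors.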
The KNN model was trained using data from 15 tests (135 labelled vectors) and tested on the remaining 31 tests (279 unlabelled vectors). Of these, 266 vectors were classified correctly, giving 95.5% accuracy. A summary of the machine-learning methods and prediction errors is given in Table 3.

Energy system analysis
To make recommendations about an optimisation strategy for the hybrid energy system, a number of cases are evaluated:
1. Engine only: this case gives a baseline from the empirical data, used to contextualise the other simulations.
2. Battery only: this case assesses the performance of a system working solely from battery power.
3. Micro-cycling: the engine is set to a preferred power set-point and any power demand greater than this is supplied by the battery. When the power demand is less than the set-point, the excess is fed into charging the battery until it is fully charged.
4. Full-cycling: this scenario is similar to the battery-only scenario, but a constraint is placed on the system such that the battery switches between a charging and a discharging state. During a discharge period, the battery will only supply energy until fully discharged, at which point it changes state and will only absorb energy until full, triggering a change back to discharge. This ensures the battery encounters only full charge cycles.
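The micro-cycling case (case 3) can be sketched as a simple dispatch loop; the set-point, battery capacity, and demand samples below are illustrative assumptions rather than the simulated vessel parameters:

```python
import numpy as np

def micro_cycle(demand_kw, setpoint_kw=350.0, capacity_kwh=200.0,
                soc_kwh=100.0, dt_h=1.0 / 3600.0):
    """Micro-cycling dispatch: the engine prefers a fixed set-point; the
    battery absorbs any surplus (until full) and supplies any shortfall
    (assumed never to empty in this sketch)."""
    engine, battery, soc = [], [], soc_kwh
    for p in demand_kw:
        surplus = setpoint_kw - p                 # +ve: charge, -ve: discharge
        if surplus > 0:
            charge = min(surplus, (capacity_kwh - soc) / dt_h)
            soc += charge * dt_h
            engine.append(p + charge)             # engine covers demand + charging
            battery.append(-charge)               # negative = charging
        else:
            soc += surplus * dt_h                 # battery supplies the shortfall
            engine.append(setpoint_kw)
            battery.append(-surplus)
    return np.array(engine), np.array(battery), soc

# One second of 300 kW demand followed by one second of 400 kW demand
eng, bat, final_soc = micro_cycle(np.array([300.0, 400.0]))
```

At each step the engine and battery powers sum to the demand, so the total energy delivered to the vessel is unchanged; only the split between the sources varies.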
Two power demand profiles were derived from the two test runs (290616 and 071216). A plot of the raw demand data for both datasets is included in Fig. 8 to allow comparison of the runs. From visual inspection, the 290616 dataset contains a larger number of peaks than the 071216 dataset, despite being a run of the same route. This shows that the power demand does not remain the same for the same route, although the distribution of power remains similar. The difference in energy demand between the datasets is 32%.
Based on vessel run 071216, with a total power demand of 175 kWh, the diesel engine fuel consumption was 187.6 l/h, with the simulated hybrid system consuming 29.6 l/h. For test run 290616, the total power demand was 342 kWh, with a diesel consumption of 197 l/h and a simulated hybrid system consumption of 53.3 l/h (see Table 4).
The preliminary results of our simulation show that a saving of 70 to 80% in fuel can be achieved. This is because when propulsion energy requirements are low, such as during constant-speed navigation between stops, the diesel engine is operated up to a maximum power output of 350 kW; if the power demand is below 350 kW, the surplus energy is stored in the battery. When power requirements exceed 350 kW, such as during docking and un-docking manoeuvres, the energy from the battery is used.