Enhancing Pelican Optimization Algorithm with Differential Evolution: A Novel Hybrid Metaheuristic Approach

Rebin Abdulkareem Hamaamin1, Omar Mohammed Amin Ali2

1Department of Computer Science, College of Sciences, Charmo University, Chamchamal, Sulaymaniyah, Iraq. 2Department of Information Technology, Chamchamal Technical Institute, Sulaimani Polytechnic University, Sulaymaniyah, Iraq.

Corresponding author’s e-mail: Rebin Abdulkareem Hamaamin, Department of Computer Science, College of Sciences, Charmo University, Chamchamal, Sulaymaniyah, Iraq. E-mail: rebin.abdulkarim@chu.edu.iq
Received: 30-04-2025 Accepted: 14-08-2025 Published: 08-09-2025
DOI: 10.21928/uhdjst.v9n2y2025.pp101-114


ABSTRACT

Finding a proper trade-off between exploration and exploitation remains a pressing problem when optimizing complex objective functions. Classical methods often converge prematurely or fail to search the space of potential solutions adequately, especially for multi-variate, high-dimensional problems. To address this, this work proposes a hybrid metaheuristic that combines the pelican optimization algorithm (POA) and differential evolution (DE), referred to as POA-DE, pairing the explorative character of POA with the exploitative strength of DE. The central issue addressed is the conflict between global search and local exploitation when solving complex optimization tasks: POA is applied to improve search performance across the wider space, while DE refines solutions in the local search space. The hybrid model thereby aims to avoid the shortcomings that arise when either component is used on its own. To validate POA-DE, an intensive empirical examination was performed on several benchmark functions. The results show that the proposed method achieves better stability, efficiency, and convergence speed than the basic POA. Extending hybrid optimization techniques in this way is therefore a meaningful step toward stronger metaheuristic algorithms for solving optimization problems.

Index Terms: Pelican Optimization Algorithm, Differential Evolution, Hybrid Metaheuristic, Exploration and Exploitation, Optimization Benchmark Functions

1. INTRODUCTION

Optimization plays a key role in solving complex real-world problems across various disciplines, including engineering, medicine, and economics. One of the most pressing challenges in this domain is finding a suitable balance between exploration (searching new regions of the solution space) and exploitation (refining current solutions). Many metaheuristic algorithms suffer from premature convergence or inefficient traversal of the search space, particularly when handling high-dimensional or multi-modal functions, and the resulting trade-off between exploration and exploitation limits their effectiveness in high-dimensional, noisy, and constrained environments. Optimizing complex non-linear functions with constraints thus remains a critical challenge in engineering and computational science. This paper addresses this issue by proposing a hybrid approach that combines the pelican optimization algorithm (POA), known for its strong global search capabilities, with differential evolution (DE), a method renowned for effective local refinement. By combining these two complementary methods, the proposed POA-DE algorithm seeks to overcome the drawbacks of using each method on its own and to improve performance on difficult optimization problems.

This paper proposes a novel hybrid metaheuristic, the POA-DE, which synergistically integrates the exploratory capabilities of POA with the exploitative advantages of DE. This integration leads to improved convergence speed and solution quality on complex optimization problems compared to the original POA and other metaheuristics.

Optimization is the systematic process of finding the most favorable solution within a reasonable timeframe. The field has changed significantly with the introduction of the genetic algorithm (GA) and DE, while optimization challenges continue to grow in complexity; resolving them therefore requires increasingly efficient optimization techniques [1]. Several effective algorithms may be available for a given problem, but it is premature to designate any one of them as superior until a comprehensive evaluation of their relative performance on that problem has been conducted. Optimization algorithms are capable of efficiently solving many difficult problems [2]. Optimization problem-solving methodologies can broadly be classified into two groups: deterministic methods and stochastic methods [3]. Deterministic approaches struggle with difficult optimization problems whose objective functions are discontinuous, high-dimensional, non-convex, or non-differentiable. Stochastic methods, in contrast, can address such problems effectively by performing a random search of the problem-solving space; they do not rely on derivative or gradient information from the objective function [4]. Stochastic methods are often classified as heuristic or metaheuristic. Nature-inspired metaheuristic algorithms can efficiently solve both real-world problems and classical mathematical functions through their exploration and exploitation stages; however, achieving a balance between these two stages remains a critical challenge for metaheuristic optimization [5]. Metaheuristic methods have been used to tackle a wide range of optimization issues. The objective is to determine the maximum or minimum value of a given function, such as minimizing the time required for a specific journey or the cost of completing a task [6]. These algorithms nevertheless have shortcomings in reaching global optima, since they must balance the competing goals of exploration and exploitation. Owing to their strong performance, metaheuristic algorithms are applied to real-world problems arising in electromagnetics [7], engineering design [8], constrained optimization [9], economics [10], medicine [11], and task planning [12]. They are also used effectively in a wide range of engineering and scientific applications, such as optimizing power generation in electrical engineering, designing bridges and buildings in civil engineering, performing data mining tasks such as classification, prediction, clustering, and system modeling, and designing radar and networking systems in communication [13].

In 2022, Trojovský and Dehghani [14] introduced a new optimization technique called the pelican optimization algorithm (POA), modeled on the foraging behavior of pelicans. Compared against eight well-known swarm intelligence (SI) optimization methods, the POA achieves outstanding performance by effectively balancing exploration and exploitation, and it has been used in many practical applications. Although the standard POA is valuable, it is also susceptible to becoming trapped in local optima. To overcome this problem, several researchers have proposed improved variants. One such variant applies tent chaos to improve population diversity and incorporates a dynamic weight factor to enable continuous updates of the pelicans' positions; it surpasses the classic POA and provides better outcomes on 10 benchmark functions, although the execution times of the algorithms were not compared [15]. In particle swarm optimization (PSO), each particle modifies its path toward its own past optimal position and the current optimal position achieved by any other member in its local area [16]. PSO has the advantage over GA of being theoretically straightforward, requiring little computation time, and having few parameters to tune. Its primary drawback, however, is the potential for premature convergence, particularly on intricate multi-peak search problems. To address this issue and enhance the efficiency of the PSO algorithm, hybrid approaches that combine PSO with DE have been developed; DE itself is an evolutionary method closely related to GA [17]. One hybrid PSO-DE technique tackles global optimization by combining the speed of PSO with the exploration capabilities of DE: the PSO algorithm first identifies the most promising solution area, and a mix of PSO and DE then locates the ideal point [18]. The POA-DE proposed here distinguishes itself from previous POA variants by using DE in its hybridization process, combining DE's mutation and crossover operators with the POA's iterative position-update mechanism. The proposed hybrid method incorporates an additional DE phase into the primary loop of the POA, aiming to enhance both the exploration and exploitation capabilities of the original algorithm. Integrating DE's mechanism for generating a pool of prospective solutions with POA's mechanism for modifying positions yields POA-DE, which offers a more balanced and comprehensive approach to global search and local fine-tuning and significantly enhances the optimization outcomes of the original POA.

The remainder of this paper is organized as follows. Section 2 reviews related optimization algorithms. Section 3 describes the POA and DE. Section 4 provides a comprehensive explanation of the proposed POA-DE approach. Section 5 reports the comparison between POA-DE and the original POA on 23 benchmark test functions [19] and documents the statistical results. Section 6 presents real-world applications. Finally, the conclusion is presented and suggestions for future research are offered.

2. LITERATURE REVIEW

Stochastic population-based optimization algorithms are among the best methods for addressing optimization problems. Based on the primary concepts and sources of inspiration that shaped their design, optimization algorithms may be broadly divided into four groups: swarm-based, evolutionary-based, physics-based, and game-based techniques. Swarm-based optimization algorithms are developed from natural phenomena such as the swarm behaviors of insects, animals, and other living things. One of the first and most widely used swarm-based algorithms is PSO, which draws inspiration from how birds forage for food; the best position each population member has encountered and the best position the whole population has experienced are used to update each member's status [20]. The modeling of a classroom environment and student-teacher interactions serves as the foundation for teaching-learning-based optimization (TLBO), in which population members exchange information with one another and are updated through the teacher and learner phases [21]. The social behavior and hierarchical structure of gray wolves while hunting serve as the inspiration for grey wolf optimization (GWO). Alpha, beta, delta, and omega wolves are the four wolf types employed in GWO to simulate the hierarchical leadership of gray wolves, and population members are updated by modeling the three primary hunting stages: searching for prey, surrounding prey, and attacking prey [22]. The whale optimization algorithm (WOA) is a nature-inspired, swarm-based algorithm that models the social behavior of humpback whales and their bubble-net hunting technique; WOA uses three hunting phases, searching for prey, encircling prey, and bubble-net foraging, to update population members [23]. The tunicate swarm algorithm (TSA) was developed by simulating the swarm behavior and jet propulsion of tunicates during feeding and navigation; TSA updates the population through four phases: avoiding conflicts between search agents (SAs), converging toward the best SA, moving toward the best neighbor, and swarm behavior [24]. The movement strategies used by marine predators to catch their food in the ocean served as the model for the marine predators algorithm (MPA). The population update process in MPA is divided into three stages according to the relative speeds of predator and prey: (i) the predator is faster, (ii) the predator and prey are equal in speed, and (iii) the prey is faster [25].

Evolutionary-based optimization algorithms are built on models of genetic, biological, and other evolutionary processes. One of the first and most popular evolutionary algorithms is the GA, which draws inspiration from Charles Darwin's theory of natural selection and the reproductive process; it employs three primary operators, selection, crossover, and mutation, to update the population members [26]. The artificial immune system (AIS) algorithm is based on the immune system's response to viruses and bacteria; cognitive, activation, and effector stages all influence the population-updating process in AIS [27].

Physics-based optimization methods are founded on the modeling of physical laws. The metallurgical melting and cooling process inspired simulated annealing, in which a material is heated and then slowly cooled under carefully monitored conditions to reduce its flaws; the simulated annealing optimizer was designed using mathematical modeling of this process [28]. The modeling of the gravitational attraction between objects at varying distances from one another served as the inspiration for the gravitational search algorithm (GSA), which updates its population members by modeling Newtonian principles of motion and calculating gravitational force [29].

Game-based optimization algorithms are constructed by modeling player behavior and the rules of various single- and multiplayer games. The football game-based optimizer (FGBO) simulates player behavior and club relations in a football league; its population update procedure rests on four stages: league holding, training, player transfers between teams, and club promotion and relegation [30]. Tug of war optimization (TWO) models player behavior in a tug of war, updating population members by modeling the tensile force between members who compete with one another [31].

While numerous metaheuristic algorithms such as GA, PSO, and DE have been proposed, most prior studies provide primarily descriptive overviews of their applications. A critical comparison reveals that many of these methods suffer from premature convergence or lack the balance between exploration and exploitation necessary for complex, high-dimensional optimization problems. The proposed POA-DE addresses this gap by integrating the exploratory nature of the POA with DE’s exploitative strengths, aiming to improve convergence speed and solution quality.

3. THE POA ALGORITHM AND DE

3.1. Biological Inspiration and Mathematical Modeling of POA

The pelican is a large bird known for its prominent beak and throat pouch, which it uses to capture and consume food [32]. The species congregates in communal roosts, where groups of pelicans, sometimes numbering in the hundreds, gather together. In appearance, pelicans weigh approximately 2.75-15 kg, stand 1.06-1.83 m tall, and have a wingspan of 0.5-3 m. They primarily consume fish but also feed on frogs, turtles, and crustaceans, and in times of extreme hunger they may even consume shellfish. Pelicans often exhibit collective behavior when foraging for prey: on locating their prey, they dive swiftly into the water from a height of 10-20 m, although some species hunt from lower heights. Pelicans are considered highly skilled hunters on account of their intelligence, hunting habits, and techniques, and this hunting behavior is the primary inspiration for the design of the proposed POA [14], [33].

The proposed POA is a population-based algorithm in which the pelicans constitute the population. In population-based algorithms, every individual is a prospective solution: each member proposes values for the variables of the optimization problem according to its position in the search space. Initially, the members of the population are allocated randomly within the specified range of values using Equation (1).

x_(a,b) = Y_b + rand · (Z_b − Y_b),   a = 1, 2, …, M;  b = 1, 2, …, N      (1)

where x_(a,b) is the value of the bth variable specified by the ath candidate solution, M is the number of population members, N is the number of problem variables, rand is a random number in the interval [0, 1], Y_b is the lower bound, and Z_b is the upper bound of the bth problem variable. The POA then updates these candidate solutions by replicating the pelicans' behavior and strategy for approaching and hunting prey.
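For illustration, the random initialization of Equation (1) can be expressed in a few lines of Python. This is a minimal sketch using NumPy; the function name and interface are ours, not the authors' code.

```python
import numpy as np

def initialize_population(M, N, lower, upper, rng=None):
    """Eq. (1): place M candidate solutions uniformly at random inside the
    variable bounds, x_(a,b) = Y_b + rand * (Z_b - Y_b)."""
    rng = np.random.default_rng() if rng is None else rng
    Y = np.asarray(lower, dtype=float)   # lower bounds Y_b, shape (N,)
    Z = np.asarray(upper, dtype=float)   # upper bounds Z_b, shape (N,)
    return Y + rng.random((M, N)) * (Z - Y)

# Example: 20 pelicans (candidate solutions) for a 5-variable problem bounded in [-100, 100]
X = initialize_population(20, 5, [-100.0] * 5, [100.0] * 5)
```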

3.2. DE and its Role in POA-DE

The DE algorithm, introduced by Storn and Price [17], is a powerful evolutionary optimization technique widely used for solving continuous and multimodal optimization problems. DE operates through three primary operations: Mutation, crossover, and selection. These steps allow the population to evolve better solutions over iterations.

In the context of the hybrid POA-DE algorithm, DE enhances the global search ability of POA by diversifying the candidate solutions early in the search process, helping avoid premature convergence. The new trial vector is generated using DE's mutation and crossover operations. Depending on the nature of the problem, various mutation and crossover strategies can be employed [34], [35].

This hybridization leverages the global exploration strength of DE and the problem-specific exploitation ability of POA, leading to a more robust and efficient optimization process.

3.3. Parameter Settings

Table 1 summarizes the key parameters used in the POA-DE algorithm. These parameters control the behavior and performance of the algorithm during optimization. Selecting appropriate values is essential to balance exploration of the search space and convergence speed, ensuring effective and efficient optimization results.

TABLE 1: Parameter settings


4. HYBRID APPROACH: POA-DE

The suggested study introduces a novel metaheuristic algorithm called POA-DE, which combines the features of POA and DE. The purpose of this hybridization is to merge DE's global search capability with POA's local exploitation capability, achieving a balance between exploration and exploitation in the optimization process. The POA-DE algorithm operates in two distinct stages. The first stage employs the DE algorithm: for each population member, DE randomly selects three distinct candidate solutions, and the mutation operator generates a mutant vector by adding the weighted difference of the first two solutions to the third. The mutant vector is then crossed with the current candidate, producing a new trial solution, and the trial solution enters the population only if its fitness exceeds that of the current member. This phase strengthens the algorithm's search for the optimal solution and shields it from becoming stuck in suboptimal solutions. The POA takes over in the second phase; the algorithm is named after the pelicans' hunting technique. During exploration, each candidate solution either moves closer to or farther away from a randomly chosen reference solution, also known as a food source, based on a comparison of their fitness values: the candidate approaches the reference point if the reference has better fitness, and it moves away from it otherwise. This is particularly helpful in investigating the search space. Following the exploration phase, the exploitation phase improves each solution by adding a small random increment that progressively decreases over the iterations. This fine-tuning is crucial for exploiting the local search space and improving the quality of the acquired solutions.

The hybrid POA-DE algorithm has been tested using a collection of benchmark functions (F1–F23) to compare the results with the original POA. Experiments were conducted to evaluate the performance and adaptability of POA-DE across various population sizes and iteration counts. The outcomes assessment demonstrated that POA-DE exhibited a superior level of accuracy when compared to the original POA in several functions, thereby confirming the successful integration of DE into the POA framework.

In the DE phase of the proposed POA-DE algorithm, the mutation factor (F) and crossover rate (CR) play crucial roles in controlling the search dynamics. In this study, F is set to 0.5 and CR to 0.9, which are commonly used default values known to provide a good trade-off between exploration and exploitation. To ensure the robustness of these parameters, a sensitivity analysis was conducted by varying F within the range [0.4, 0.9] and CR within [0.5, 1.0]. The results confirmed that the algorithm maintains stable performance under these variations, supporting the suitability of the chosen parameter values for the benchmark functions used in this work.
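The sensitivity analysis described above can be reproduced with a simple parameter sweep. The sketch below assumes a user-supplied run_optimizer callable (a hypothetical wrapper around one POA-DE run that returns the best fitness found); it is not the authors' experimental script.

```python
import itertools
import numpy as np

def sensitivity_sweep(run_optimizer, F_grid, CR_grid, runs=10):
    """Re-run the optimizer for every (F, CR) pair and record the mean and
    standard deviation of the best fitness found over several runs."""
    table = {}
    for F, CR in itertools.product(F_grid, CR_grid):
        best = [run_optimizer(F=F, CR=CR) for _ in range(runs)]
        table[(F, CR)] = (float(np.mean(best)), float(np.std(best)))
    return table

# Example with grids spanning the ranges reported above:
# results = sensitivity_sweep(run_optimizer,
#                             F_grid=np.arange(0.4, 0.95, 0.1),
#                             CR_grid=np.arange(0.5, 1.05, 0.1))
```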

Algorithm 1 presents the pseudocode and flowchart of POA-DE.

4.1. Mathematical Equations for Hybrid POA-DE

Equation (2), the mutation equation of the DE phase, yields a new candidate solution by introducing diversity into the population. Three solutions are selected at random, and a scaled difference between two of them is added to the third; this creates a mutant vector that can explore a new area of the search space. Consequently, it helps the algorithm evade local optima and amplifies its capacity for global search.

V_i = X_a + F · (X_b − X_c)      (2)

Here, V_i is the mutant vector, F is the differential weight (mutation factor), and X_a, X_b, and X_c are three distinct, randomly selected individuals from the population.

The crossover operation in Equation (3) combines the mutant vector generated in the previous step with the current solution to produce a trial solution: each component is chosen at random from either the mutant vector or the current solution. This guarantees that the trial solution inherits features of both the original and the mutant solutions, diversifying the population and increasing the chance of arriving at an even better solution.

ALGORITHM 1: Pseudo code of pelican optimization algorithm-differential evolution


U_(i,j) = V_(i,j) if rand_j ≤ CR or j = j_rand; otherwise U_(i,j) = X_(i,j)      (3)

where U_(i,j) is the trial vector, X_(i,j) is the target vector, V_(i,j) is the mutant vector, CR is the crossover rate, and j_rand is a randomly chosen index that guarantees at least one component is inherited from the mutant vector.
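Taken together, Equations (2) and (3) amount to the following short routine, a sketch of the standard DE/rand/1 mutation with binomial crossover; the function name and interface are illustrative. The greedy comparison against the target vector then decides whether the trial vector replaces it.

```python
import numpy as np

def de_trial_vector(X, i, F=0.5, CR=0.9, rng=None):
    """Build one DE trial vector for population member i using Eq. (2)
    (mutation) and Eq. (3) (binomial crossover). X has shape (M, N)."""
    rng = np.random.default_rng() if rng is None else rng
    M, N = X.shape
    a, b, c = rng.choice([k for k in range(M) if k != i], 3, replace=False)
    V = X[a] + F * (X[b] - X[c])                    # Eq. (2): mutant vector
    j_rand = rng.integers(N)                        # forces at least one gene from V
    mask = (rng.random(N) < CR) | (np.arange(N) == j_rand)
    return np.where(mask, V, X[i])                  # Eq. (3): trial vector U_i
```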

The exploration phase in Equation (4) determines whether a candidate solution moves toward or away from a randomly selected "food" solution, depending on whether the current candidate performs better or worse than the food. If the candidate is worse, it approaches the food in an attempt to improve itself; if it is better, it steps slightly away, encouraging a more vigorous search for other options. This allows the algorithm to extend its search in multiple directions rather than being confined to regions of lower quality.

X_i^new = X_i + rand · (X_FOOD − I · X_i) if f(X_FOOD) < f(X_i); otherwise X_i^new = X_i + rand · (X_i − X_FOOD)      (4)

Here, X_i^new is the updated position of an individual, X_FOOD is the location of a random individual considered as "food," f(·) denotes the objective (fitness) function, and I is a random binary value.

The exploitation phase in Equation (5) refines the candidate solution by adjusting its position with a perturbation that shrinks over time. Initially the changes are larger, allowing the neighborhood of the solution to be searched; as the algorithm proceeds, the changes become smaller, drawing the solution as close to the optimal value as possible. This ensures that the algorithm converges to a good solution within a short span of time.

X_i^new = X_i + R · (1 − t/T) · (2 · rand − 1) · X_i      (5)

This equation introduces a small random perturbation to X_i whose radius shrinks with the iteration counter, where R is a small constant (0.2 in the original POA), t is the current iteration, and T is the maximum number of iterations, so the algorithm iteratively refines the solution towards the optimal value.
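Pulling Equations (1)-(5) together, the overall loop of Algorithm 1 can be sketched as below. This is a minimal, illustrative Python implementation reconstructed from the description in this section, assuming minimization, drawing I from {1, 2} and using R = 0.2 as in the original POA; it is not the authors' reference code.

```python
import numpy as np

def poa_de(objective, M, N, lower, upper, max_iter, F=0.5, CR=0.9, seed=None):
    """Sketch of the hybrid POA-DE loop: a DE phase (Eqs. 2-3 with greedy
    selection) followed by the POA exploration (Eq. 4) and exploitation
    (Eq. 5) phases. Minimization is assumed; names are illustrative."""
    rng = np.random.default_rng(seed)
    lower = np.full(N, lower, dtype=float)
    upper = np.full(N, upper, dtype=float)
    X = lower + rng.random((M, N)) * (upper - lower)            # Eq. (1)
    fit = np.apply_along_axis(objective, 1, X)

    for t in range(1, max_iter + 1):
        # --- DE phase: mutation, crossover, greedy selection ---
        for i in range(M):
            a, b, c = rng.choice([k for k in range(M) if k != i], 3, replace=False)
            V = X[a] + F * (X[b] - X[c])                        # Eq. (2)
            j_rand = rng.integers(N)
            mask = (rng.random(N) < CR) | (np.arange(N) == j_rand)
            U = np.clip(np.where(mask, V, X[i]), lower, upper)  # Eq. (3)
            fU = objective(U)
            if fU < fit[i]:                                     # keep only improvements
                X[i], fit[i] = U, fU

        # --- POA exploration: move toward/away from a random "food" member ---
        for i in range(M):
            k = rng.integers(M)
            food, f_food = X[k], fit[k]
            I = rng.integers(1, 3)                              # 1 or 2, as in the original POA
            if f_food < fit[i]:
                cand = X[i] + rng.random(N) * (food - I * X[i])     # Eq. (4), food is better
            else:
                cand = X[i] + rng.random(N) * (X[i] - food)         # Eq. (4), food is worse
            cand = np.clip(cand, lower, upper)
            f_cand = objective(cand)
            if f_cand < fit[i]:
                X[i], fit[i] = cand, f_cand

        # --- POA exploitation: shrinking local perturbation ---
        R = 0.2
        for i in range(M):
            cand = X[i] + R * (1 - t / max_iter) * (2 * rng.random(N) - 1) * X[i]  # Eq. (5)
            cand = np.clip(cand, lower, upper)
            f_cand = objective(cand)
            if f_cand < fit[i]:
                X[i], fit[i] = cand, f_cand

    best = int(np.argmin(fit))
    return X[best], fit[best]

# Example: 30-dimensional sphere function, 20 search agents, 500 iterations
# x_best, f_best = poa_de(lambda x: float(np.sum(x**2)), M=20, N=30,
#                         lower=-100, upper=100, max_iter=500)
```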

4.2. Statistical Validation

The Wilcoxon rank-sum test was carried out with a significance threshold of 0.05 to provide statistical evidence that POA-DE outperforms the compared methods. The findings indicate that POA-DE performs considerably better than the metaheuristics chosen for comparison on the majority of benchmark functions. This substantiates that the observed performance gains are statistically significant and not the product of random chance.
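As a sketch of how such a test can be computed, SciPy's rank-sum implementation can be applied to the per-run best fitness values of the two algorithms; the input lists are assumed to come from repeated independent runs.

```python
from scipy.stats import ranksums

def is_significant(poa_de_runs, baseline_runs, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test on per-run best fitness values;
    returns the p-value and whether the difference is significant at alpha."""
    _, p_value = ranksums(poa_de_runs, baseline_runs)
    return p_value, p_value < alpha

# Example: p, significant = is_significant(best_fitness_poa_de, best_fitness_poa)
```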

5. EXPERIMENTAL SETUP AND RESULTS

A novel POA-DE method is provided in this study, followed by a comprehensive empirical analysis using 23 benchmark functions. This assessment is part of the experimental design used to compare the effectiveness of the hybrid technique: the 23 benchmark functions are a standard set of test problems on which the proposed POA-DE algorithm can be measured against the original POA. The test results are central to supporting the claim that combining DE and POA improves the balance between exploration and exploitation as well as the optimization performance in many situations. The tables and figures below provide a performance comparison between the original POA and the hybrid POA-DE. The results are organized by two parameters: the number of search agents (SA) and the maximum number of iterations (MI). In cases where POA-DE performs better than POA, significant improvements (SIs) are indicated. In addition to the best scores, each table lists the benchmark functions on which both algorithms were evaluated, and charts accompany the tables to facilitate comparison. Table 2 displays the results of the POA versus POA-DE comparison for 20 SA. The findings cover iteration counts of 100, 500, 800, and 1000; for each iteration count, the best result of each algorithm on each benchmark function is shown, with significant improvements highlighted where POA-DE outperforms POA. More precisely: for 100 iterations, there are 17 instances where POA-DE exceeds POA (SI); for 500 iterations, there are 16 such improvements; for 800 iterations, POA-DE shows an SI over POA in 15 cases; and for 1000 iterations, there are 14 instances of SI. These findings allow a comparison of the relative efficiency of the studied algorithms when operating with a smaller population of search agents, and they show how often hybridizing the POA improves its performance compared with the original version.

TABLE 2: POA versus POA-DE results for 20 SA


Table 3 presents the results of the comparison between the original POA and the hybrid POA-DE for scenarios with 30 SA, covering iteration counts of 100, 500, 800, and 1000. For each iteration count, the table shows the best scores achieved by each algorithm on each benchmark function and indicates significant improvements where POA-DE outperforms POA. Specifically, POA-DE yields 17, 14, 14, and 15 significant improvements over POA for 100, 500, 800, and 1000 iterations, respectively. These results offer a comparative view of the algorithms' performance with a medium-sized population and highlight how the iteration count influences the effectiveness of the hybrid approach over the original POA.

TABLE 3: POA versus POA-DE results for 30 SA


Furthermore, Table 4 presents the results of the comparison between the original POA and the hybrid POA-DE for scenarios with 50 SA, covering iteration counts of 100, 500, 800, and 1000. For each iteration count, the table shows the best scores achieved by each algorithm on each benchmark function and indicates SIs where POA-DE outperforms POA. Specifically, POA-DE yields 16, 14, 16, and 13 SIs over POA for 100, 500, 800, and 1000 iterations, respectively. This table provides insight into how a larger population size affects the performance improvements of POA-DE relative to POA and into the impact of the iteration count on the optimization results.

TABLE 4: POA versus POA-DE results for 50 SA


Finally, Table 5 presents the results of the comparison between the original POA and the hybrid POA-DE for scenarios with 80 SA, covering iteration counts of 100, 500, 800, and 1000. For each iteration count, the table shows the best scores achieved by each algorithm on each benchmark function and indicates SIs where POA-DE outperforms POA. Specifically, POA-DE yields 17, 13, 13, and 14 SIs over POA for 100, 500, 800, and 1000 iterations, respectively. These results reflect the performance of the algorithms with the largest population and show how different iteration counts affect the relative success of the hybrid POA-DE approach over the original POA.

TABLE 5: POA versus POA-DE results for 80 SA


The graphical comparisons between the POA and the hybrid POA-DE, with a focus on significant improvements, are illustrated in Figs. 1-4. Each figure reports the results of the 23 test functions for a given number of SAs and the four iteration counts of 100, 500, 800, and 1000. As shown in Fig. 1, when SA is set to 20, POA-DE performs better than POA across the board, and the differences grow with more iterations; the regions circled in red mark these enhancements, where POA-DE's optimization results surpass those of the POA mechanism. The same observation can be made in Fig. 2, which shows the results for SA = 30: for most test functions, and especially at higher iteration numbers, POA-DE outperforms POA. The substantial enhancements highlighted there underscore the efficiency of the hybrid design with a moderately larger population. Figs. 3 and 4 sustain this emphasis on significant improvements for SA values of 50 and 80, respectively. As indicated by the red marks, significant improvements appear more frequently as the iteration count increases, implying that with a larger population size the hybrid algorithm has greater ability to search and to solve complex functions optimally. The hybrid algorithm outperforms the original POA in each setting, and the largest SA of 80 yields significant improvements from the POA to the POA-DE, as seen in Fig. 4. Taken together, these figures show that POA-DE offers a substantial improvement over the original POA, with the number of significant improvements increasing with population size and iteration count, effectively substantiating the use of the hybrid model across the different test functions.


Fig. 1. Significant improvements in optimization performance: POA versus POA-DE for 20 SA. POA: Pelican optimization algorithm, DE: Differential evolution, SA: Search agent.


Fig. 2. Significant improvements in optimization performance: POA versus POA-DE for 30 SA. POA: Pelican optimization algorithm, DE: Differential evolution, SA: Search agent.


Fig. 3. Significant improvements in optimization performance: POA versus POA-DE for 50 SA. POA: Pelican optimization algorithm, DE: Differential evolution, SA: Search agent.


Fig. 4. Significant improvements in optimization performance: POA versus POA-DE for 80 SA. POA: Pelican optimization algorithm, DE: Differential evolution, SA: Search agent.

6. REAL-WORLD APPLICATION

To evaluate the real-world effectiveness of the proposed POA-DE algorithm, it was applied to practical engineering and healthcare problems. These applications demonstrate that POA-DE attains better solution quality and faster convergence than the original POA and comparable methods, especially on complex problems with many variables and constraints.

6.1. Medical Diagnosis–Feature Selection for Cancer Classification

POA-DE was employed to optimize feature selection for the Wisconsin breast cancer diagnostic dataset. This dataset involves diagnosing tumors as benign or malignant using 30 real-valued features. Feature selection was modeled as a binary optimization problem, aiming to maximize classification accuracy and minimize the number of selected features. A support vector machine served as the classifier, with 10-fold cross-validation to evaluate performance, as shown in Table 6.
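A sketch of the kind of fitness function this setup implies is shown below: a weighted trade-off between the 10-fold cross-validated SVM error and the fraction of selected features. The weighting alpha, the 0.5 binarization threshold for continuous POA-DE positions, and the function name are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def feature_subset_fitness(mask, X, y, alpha=0.99):
    """Fitness of a binary feature mask: a weighted trade-off between the
    10-fold SVM error and the fraction of selected features (alpha is an
    assumed weight, not taken from the paper). Lower is better."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():                       # reject empty feature subsets
        return 1.0
    accuracy = cross_val_score(SVC(), X[:, mask], y, cv=10).mean()
    return alpha * (1.0 - accuracy) + (1.0 - alpha) * (mask.sum() / mask.size)

# The Wisconsin diagnostic dataset (30 features) ships with scikit-learn:
# from sklearn.datasets import load_breast_cancer
# data = load_breast_cancer()
# fitness = feature_subset_fitness(position > 0.5, data.data, data.target)
```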

TABLE 6: Comparison of POA-DE and other algorithms for feature selection in breast cancer classification


The results show that POA-DE not only improved classification accuracy by ~1.7% over POA but also achieved it using fewer features, which reduces computational burden and enhances model interpretability. Its robust search capability in high-dimensional and noisy medical data makes it a promising candidate for clinical decision support systems.

6.2. Renewable Energy–Optimal Placement of Wind Turbines

POA-DE was applied to optimize the layout of wind turbines in a wind farm to maximize energy output while considering wake-effect losses, turbine spacing constraints, and land area limitations. The wind flow was simulated based on the prevailing direction and velocity using a simplified Jensen wake model; the performance comparison is shown in Table 7.

TABLE 7: Performance comparison of POA-DE and other metaheuristics in wind farm layout optimization


Objective function:

  • Maximize total annual energy production

  • Penalize overlap or suboptimal spacing that increases wake losses.

The hybrid POA-DE produced layouts that generated up to 12.1% more energy compared to the original POA and also converged faster to near-optimal solutions. This application highlights POA-DE’s ability to efficiently solve complex, non-linear, constrained engineering problems, contributing to the planning of sustainable renewable energy infrastructures.
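For illustration, a penalized layout objective of this kind can be sketched as follows, with a heavily simplified Jensen wake term (single wind direction, identical turbines) and assumed constants such as the rotor radius, wake-decay coefficient, minimum spacing, and penalty weight; it conveys the structure of the objective rather than the authors' actual model.

```python
import numpy as np

def wake_deficit(xy, wind_dir_deg, rotor_r=40.0, k=0.075, a=1/3):
    """Fractional velocity deficit at each turbine from all upstream turbines,
    using a simplified Jensen model; all constants here are illustrative."""
    theta = np.deg2rad(wind_dir_deg)
    rot = np.array([[np.cos(theta), np.sin(theta)],
                    [-np.sin(theta), np.cos(theta)]])
    p = xy @ rot.T                            # rotate so the wind blows along +x
    n = len(p)
    deficit = np.zeros(n)
    for i in range(n):
        d2 = 0.0
        for j in range(n):
            dx = p[i, 0] - p[j, 0]            # downstream distance from j to i
            dy = abs(p[i, 1] - p[j, 1])
            if dx <= 0:
                continue                      # j is not upstream of i
            r_wake = rotor_r + k * dx         # linear wake expansion
            if dy < r_wake:                   # i lies inside j's wake
                d2 += (2 * a / (1 + k * dx / rotor_r) ** 2) ** 2
        deficit[i] = np.sqrt(d2)              # root-sum-square wake combination
    return deficit

def layout_cost(flat_xy, wind_dir_deg=270.0, min_spacing=200.0, penalty=1e3):
    """Negative (penalized) energy proxy for a candidate layout: power scales
    with the cube of the waked wind speed, and spacing violations are
    penalized. A sketch, not the authors' model."""
    xy = flat_xy.reshape(-1, 2)
    speed = 1.0 - wake_deficit(xy, wind_dir_deg)          # normalized wind speed
    energy = np.sum(np.clip(speed, 0.0, None) ** 3)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    violations = np.sum(d[np.triu_indices(len(xy), k=1)] < min_spacing)
    return -energy + penalty * violations
```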

7. CONCLUSION

This research presents a novel metaheuristic technique called POA-DE. The technique integrates advantageous aspects of both algorithms, leading to enhanced optimization performance in terms of solution quality and convergence compared with the original POA. The trials conducted on several benchmark functions demonstrate POA-DE's effectiveness in tackling complex optimization issues and confirm the value of combining DE's exploitative capabilities with POA's exploratory qualities to enhance metaheuristic optimization approaches. Future research will focus on several directions. First, the algorithm will be evaluated on a substantial collection of intricate real-world optimization problems to assess its adaptability, robustness, and reliability. Second, diverse methodologies for configuring its parameters, parameter sensitivity analyses, and various types of adaptivity will be explored. Integrating POA-DE with other metaheuristic approaches to produce more advanced and versatile optimization algorithms is another recommended direction, as is research on parallel and distributed computing approaches for applying the algorithm to large-scale problems and improving its performance. In addition, extending the comparison to other advanced metaheuristics and incorporating further statistical tests will help ensure that performance improvements are significant and not due to randomness. These enhancements aim to further establish POA-DE as a reliable and scalable optimization framework.

REFERENCES

[1] N. A. Rashed, Y. H. Ali and T. A. Rashid. "Advancements in optimization: Critical analysis of evolutionary, swarm, and behavior-based algorithms". Algorithms, vol. 17, no. 9, 416, 2024.

[2] X. S. Yang and X. He. "Nature-inspired optimization algorithms in engineering: Overview and applications". Studies in Computational Intelligence, vol. 637, pp. 1-20, 2016.

[3] D. Das, A. S. Sadiq and S. Mirjalili. "Optimization methods: Deterministic versus stochastic". In: Optimization Algorithms in Machine Learning. Springer, Singapore, 2025.

[4] F. A. Hashim, E. H. Houssein, K. Hussain, M. S. Mabrouk and W. Al-Atabany. "Honey badger algorithm: New metaheuristic algorithm for solving optimization problems". Mathematics and Computers in Simulation, vol. 192, pp. 84-110, 2022.

[5] E. H. Houssein, M. K. Saeed, G. Hu and M. M. Al-Sayed. "Metaheuristics for solving global and engineering optimization problems: Review, applications, open issues and challenges". Archives of Computational Methods in Engineering, vol. 31, pp. 4485-4519, 2024.

[6] K. Rajwar, K. Deep and S. Das. "An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges". Artificial Intelligence Review, vol. 56, pp. 13187-13257, 2023.

[7] V. Tomar, M. Bansal and P. Singh. "Metaheuristic algorithms for optimization: A brief review". Engineering Proceedings, vol. 59, no. 1, 238, 2024.

[8] I. Vale, A. Barbosa, A. Peixoto and F. Fernandes. "Solving authentic problems through engineering design". Open Education Studies, vol. 5, no. 1, 20220185, 2023.

[9] X. Yu, W. Chen and X. Zhang. "An Artificial Bee Colony Algorithm for Solving Constrained Optimization Problems". In: 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). IEEE, Xi'an, China, pp. 2663-2666, 2018.

[10] K. R. Khudaiberganovich. "The concept of mathematical models of economic problems". Miasto Przyszłości, vol. 49, pp. 392-394, 2024.

[11] S. Alagarsamy, R. R. Subramanian, T. Shree, S. Kannan, M. Balasubramanian and V. Govindaraj. "Prediction of Lung Cancer Using Meta-heuristic Based Optimization Technique: Crow Search Technique". In: 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). IEEE, Greater Noida, India, pp. 186-191, 2021.

[12] A. Kaveh and Y. Vazirinia. "Construction site layout planning problem using metaheuristic algorithms: A comparative study". Iranian Journal of Science and Technology - Transactions of Civil Engineering, vol. 43, no. 2, pp. 105-115, 2019.

[13] H. Salimi. "Stochastic fractal search: A powerful metaheuristic algorithm". Knowledge-Based Systems, vol. 75, pp. 1-18, 2015.

[14] P. Trojovský and M. Dehghani. "Pelican optimization algorithm: A novel nature-inspired algorithm for engineering applications". Sensors (Basel), vol. 22, no. 3, 855, 2022.

[15] W. Tuerxun, C. Xu, M. Haderbieke, L. Guo and Z. Cheng. "A wind turbine fault classification model using broad learning system optimized by improved pelican optimization algorithm". Machines, vol. 10, no. 5, 407, 2022.

[16] Y. Han, F. Zeng, L. Fu and F. Zheng. "GA-PSO algorithm for microseismic source location". Applied Sciences, vol. 15, no. 4, 1841, 2025.

[17] L. Abualigah, A. Sheikhan, A. M. Ikotun, R. A. Zitar, A. R. Alsoud, I. Al-Shourbaji, A. G. Hussien and H. Jia. "Particle swarm optimization algorithm: Review and applications". In: Metaheuristic Optimization Algorithms: Optimizers, Analysis, and Applications. Elsevier Science, Amsterdam, Netherlands, pp. 1-14, 2024.

[18] M. Ahmadipour, M. M. Othman, R. Bo, M. S. Javadi, H. M. Ridha and M. Alrifaey. "Optimal power flow using a hybridization algorithm of arithmetic optimization and aquila optimizer". Expert Systems with Applications, vol. 235, 121212, 2024.

[19] S. Mirjalili and A. Lewis. "The whale optimization algorithm". Advances in Engineering Software, vol. 95, pp. 51-67, 2016.

[20] J. Kennedy and R. Eberhart. "Particle swarm optimization". In: Proceedings of ICNN'95 - International Conference on Neural Networks. Vol. 4. IEEE, Perth, Australia, pp. 1942-1948, 1995.

[21] R. V. Rao, V. J. Savsani and D. Vakharia. "Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems". Computer-Aided Design, vol. 43, pp. 303-315, 2011.

[22] S. Mirjalili, S. M. Mirjalili and A. Lewis. "Grey wolf optimizer". Advances in Engineering Software, vol. 69, pp. 46-61, 2014.

[23] K. Liu and Y. Wang. "A novel whale optimization algorithm based on population diversity strategy". IAENG International Journal of Computer Science, vol. 52, no. 8, 2025.

[24] S. Kaur, L. K. Awasthi, A. L. Sangal and G. Dhiman. "Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization". Engineering Applications of Artificial Intelligence, vol. 90, 103541, 2020.

[25] A. Faramarzi, M. Heidarinejad, S. Mirjalili and A. H. Gandomi. "Marine predators algorithm: A nature-inspired metaheuristic". Expert Systems with Applications, vol. 152, 113377, 2020.

[26] D. E. Goldberg and J. H. Holland. "Genetic algorithms and machine learning". Machine Learning, vol. 3, pp. 95-99, 1988.

[27] L. N. De Castro and J. I. Timmis. "Artificial immune systems as a novel soft computing paradigm". Soft Computing, vol. 7, pp. 526-544, 2003.

[28] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi. "Optimization by simulated annealing". Science, vol. 220, pp. 671-680, 1983.

[29] E. Rashedi, H. Nezamabadi-Pour and S. Saryazdi. "GSA: A gravitational search algorithm". Information Sciences, vol. 179, pp. 2232-2248, 2009.

[30] M. Dehghani, M. Mardaneh, J. M. Guerrero, O. Malik and V. Kumar. "Football game based optimization: An application to solve energy commitment problem". International Journal of Intelligent Engineering and Systems, vol. 13, pp. 514-523, 2020.

[31] A. Kaveh and A. Zolghadr. "A novel meta-heuristic algorithm: Tug of war optimization". Iran University of Science and Technology, vol. 6, pp. 469-492, 2016.

[32] A. Louchart, N. Tourment and J. Carrier. "The earliest known pelican reveals 30 million years of evolutionary stasis in beak morphology". Journal of Ornithology, vol. 152, no. 1, pp. 15-20, 2011.

[33] J. G. T. Anderson. "Foraging behavior of the American white pelican (Pelecanus erythrorhyncos) in western Nevada". Colonial Waterbirds, vol. 14, no. 2, pp. 166-172, 1991.

[34] B. Zolghadr-Asli. "Differential evolution algorithm". In: Computational Intelligence-based Optimization Algorithms. CRC Press, United States, 2023.

[35] S. Das and P. N. Suganthan. "Differential evolution: A survey of the state-of-the-art". IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4-31, 2011.