Wireless sensor networks
Sensors
A sensor is a device designed to detect environmental changes or events, such as heat, light, sound, pressure, and magnetism, and transmit this information to other electronic systems, often to a computer processor. For instance, an acoustic sensor is one type of sensor that captures sound data.
Figure 1.1 Demonstration of an acoustic sensor
Figure 1.2 Illustration of a sensor node
Sensor nodes
A sensor node is a critical component in a network, designed to process data, collect sensory information, and communicate with other nodes. Typically, a sensor node architecture includes a sensor unit, communication unit, micro-controller unit, memory, and power unit, as illustrated in Figure 1.2.
Sensor coverage model
Sensor coverage models are used to reflect the sensing capability and quality of sensors.
Sensor coverage models aim to quantify the effectiveness of sensors in detecting physical phenomena across various locations. These models can be mathematically expressed through coverage or sensing functions that consider distances and angles. Typically, most sensing functions exhibit two common characteristics.
Sensing ability decreases as distance increases.
Due to diminishing effects of noise bursts in measurements, sensing ability can improve as the allotted sensing time (exposure) increases.
When a sensor \( s_i \) is positioned at coordinates \( (x_i, y_i) \), the Euclidean distance to a target point located at \( (l_x, l_y) \) is calculated using the formula \( d(s_i, l) = \sqrt{(x_i - l_x)^2 + (y_i - l_y)^2} \). According to the Boolean disk coverage model, the sensor covers the target point if the distance \( d(s_i, l) \) is less than or equal to the sensor's coverage radius \( r \).
In the context of sensor networks, the Euclidean distance between a sensor \( s_i \) and a target point is denoted as \( d(s_i, l) \), with a constant \( r > 0 \) representing the sensing range. This concept is illustrated in Figure 1.3(a), which showcases the Boolean disk coverage model. In addition, the truncated attenuated coverage model is introduced below, which further refines the description of sensor coverage.
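Written as a sensing function, the Boolean disk coverage model above is
\[
f(d(s_i, l)) =
\begin{cases}
1, & \text{if } d(s_i, l) \le r,\\
0, & \text{otherwise},
\end{cases}
\]
so a target point is covered exactly when it lies inside the sensing disk of radius \( r \).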
As the distance between a space point and a sensor increases, the coverage measure diminishes significantly. In these instances, it may be practical to approximate the coverage measure by truncating it for larger distance values, as indicated in Equation (1.3):
\[
f(d(s_i, l)) =
\begin{cases}
C e^{-\alpha d(s_i, l)}, & \text{if } d(s_i, l) \le r,\\
0, & \text{otherwise},
\end{cases} \qquad (1.3)
\]
where \( \alpha \) is a parameter representing the physical characteristics of the sensor and \( r \) is the sensing range. Figure 1.3(b) demonstrates the truncated attenuated coverage model.
The illustration in Figure 1.3 depicts two models related to sensor coverage: (a) the Boolean disk coverage model, showcasing red stars as target points located both inside and outside the green sensing area of a sensor, and (b) the truncated attenuated coverage model.
Sensing intensity models
The sensing field intensity models specify the collaboration of sensors in the sensing field [40]. Two types of sensor field intensity are usually applied:
\[
I(l) = \sum_{i=1}^{N} f(d(s_i, l)), \qquad (1.4)
\]
where \( N \) is the number of sensor nodes in the sensor network, i.e., the sensor network consists of \( N \) sensors \( \{s_1, s_2, \ldots, s_N\} \);
\[
I(l) = f(d(s^{*}_{i}, l)) \;\big|\; d(s^{*}_{i}, l) \le d(s_k, l)\ \ \forall k = 1, 2, \ldots, N, \qquad (1.5)
\]
where \( s^{*}_{i} \) is the sensor that has the smallest Euclidean distance to the target point at location \( l \).
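As an illustration of Equations (1.3)-(1.5), the following minimal Java sketch evaluates the truncated attenuated coverage function and the two field intensity models for a small set of sensors. The class and method names, as well as the parameter values, are assumptions made only for this example.

// Illustrative sketch: coverage and intensity functions of Equations (1.3)-(1.5).
public class SensingIntensityDemo {

    static class Sensor {
        double x, y;
        Sensor(double x, double y) { this.x = x; this.y = y; }
    }

    // Euclidean distance d(s_i, l) between a sensor and a target point (lx, ly).
    static double distance(Sensor s, double lx, double ly) {
        return Math.hypot(s.x - lx, s.y - ly);
    }

    // Truncated attenuated coverage function of Equation (1.3):
    // f(d) = C * exp(-alpha * d) if d <= r, and 0 otherwise.
    static double coverage(double d, double C, double alpha, double r) {
        return (d <= r) ? C * Math.exp(-alpha * d) : 0.0;
    }

    // Field intensity of Equation (1.4): sum of the coverage over all sensors.
    static double allSensorIntensity(Sensor[] sensors, double lx, double ly,
                                     double C, double alpha, double r) {
        double sum = 0.0;
        for (Sensor s : sensors) {
            sum += coverage(distance(s, lx, ly), C, alpha, r);
        }
        return sum;
    }

    // Field intensity of Equation (1.5): coverage of the closest sensor only.
    static double closestSensorIntensity(Sensor[] sensors, double lx, double ly,
                                         double C, double alpha, double r) {
        double dMin = Double.POSITIVE_INFINITY;
        for (Sensor s : sensors) {
            dMin = Math.min(dMin, distance(s, lx, ly));
        }
        return coverage(dMin, C, alpha, r);
    }

    public static void main(String[] args) {
        Sensor[] sensors = { new Sensor(1.0, 2.0), new Sensor(4.0, 3.0) };
        double C = 1.0, alpha = 0.5, r = 5.0;      // assumed parameter values
        System.out.println(allSensorIntensity(sensors, 2.0, 2.0, C, alpha, r));
        System.out.println(closestSensorIntensity(sensors, 2.0, 2.0, C, alpha, r));
    }
}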
Terminologies
In barrier coverage, the region of interest (ROI) is typically defined as a long, two-dimensional belt area bordered by two parallel lines. Sensors are strategically placed within this ROI to identify intruders attempting to cross the monitored zone. The ROI can either be a closed belt region, which has no boundaries, or an open belt region, as illustrated in Figure 1.4.
Figure 1.4 Demonstration of region of interest and crossing path
The usual closed belt region is a ring belt.
An open belt area is a belt area that has two boundaries orthogonal to the two parallel lines.
The rectangular belt is one of the common open belts studied in the existing literature, and also in this dissertation.
A penetration path, also known as a crossing path, is a route that links two opposing sides within a region of interest (ROI), with the entry and exit points positioned on opposite sides [32]. For a two-dimensional belt, orthogonal crossing paths are straight lines whose length is equal to the width of the belt, as shown in Figure 1.5.
A static sensor node in a wireless sensor network (WSN) is designed to collect and process sensed data, as well as to send and receive messages. Once deployed, these nodes remain stationary, enabling efficient data management and communication within the network.
A mobile sensor node possesses the features of a static sensor node while also exhibiting mobility. After deployment, it can navigate through areas with low or no coverage, functioning as a router to facilitate communication and to complete recovery tasks effectively.
Figure 1.5 Illustration of crossing path types
Wireless sensor network scenarios
Sensor nodes are strategically placed within a region of interest (ROI) to observe various physical phenomena, creating what is known as a sensor field. These sensor nodes collectively establish a sensor network, which can be organized and managed in a variety of architectural configurations to suit different applications.
Homogeneous versus heterogeneous wireless sensor networks
A homogeneous wireless sensor network (HoWSN) consists of sensor nodes that possess identical sensing, processing, and communication capabilities, as illustrated in Figures 1.6(b) and (d). In contrast, a heterogeneous wireless sensor network (HeWSN) features sensor nodes with varying capabilities, such as a node equipped with a more powerful sensor unit that can cover a larger area, as shown in Figures 1.6(a) and (c).
Static versus mobile wireless sensor networks
A static wireless sensor network (SWSN) consists of fixed sensor nodes that remain stationary after deployment, as illustrated in Figures 1.6(a), (b), and (c). In contrast, a mobile wireless sensor network (MWSN) is composed entirely of mobile nodes equipped with locomotion capabilities, allowing them to move after deployment. Additionally, a hybrid sensor network features both stationary and mobile nodes. The mobility of nodes can enhance the performance of the sensor network, although this often involves higher costs for mobile nodes and increased energy consumption for movement. In certain scenarios, a mobile sink may also be present, moving along a defined trajectory to gather sensing data, as depicted in Figure 1.6(d).
Single-hop versus multi-hop wireless sensor networks
There are two elementary communication models between sensor nodes and sinks, namely, single-hop communication and multi-hop communication. Figure 1.6(a) represents a single-hop wireless sensor network, while Figure 1.6(b) represents a multi-hop WSN, in which certain sensor nodes relay their data through intermediary nodes instead of sending it directly to the sink; this approach enhances data transmission efficiency by utilizing a network of relays to reach the destination.
Figure 1.6(c) depicts a hierarchical multi-hop sensor network.
Figure 1.6 illustrates various sensor network scenarios: (a) a single-hop, heterogeneous stationary network; (b) a multi-hop, homogeneous stationary network; (c) a multi-hop, heterogeneous stationary network; and (d) a single-hop, homogeneous stationary network featuring a mobile sink.
Optimization problems
Optimization is integral to countless applications across various industries and scientific fields. Every organization encounters optimization challenges, as processes can often be refined to minimize time, cost, and risk, or to maximize profit, quality, and efficiency. For instance, individuals may select different routes to commute based on preferences for the shortest distance, minimal traffic jams, or fewer traffic lights. Similarly, businesses can design networks that balance cost and service quality, or efficiently schedule production to achieve optimal timing.
The optimization problems are often classified into several categories as follows [59]:
Continuous optimization refers to an optimization problem where the variables that define the objective function must be continuous. In this context, a continuous variable is one that can take on any value from a set of real numbers.
Combinatorial optimization: an optimization problem is called a combinatorial optimization problem if the variables that determine the objective function must be discrete.
Definition 1.2.1. A continuous optimization problem can be defined as follows:
\[
\text{minimize/maximize } f(x)
\]
\[
\text{subject to: } \quad g_i(x) \le 0,\ i = 1, 2, \ldots, m; \qquad h_i(x) = 0,\ i = 1, 2, \ldots, p,
\]
where \( f(x): \mathbb{R}^n \rightarrow \mathbb{R} \) is the objective function to be minimized (maximized), \( g_i(x) \le 0 \) are the inequality constraints, \( h_i(x) = 0 \) are the equality constraints, and \( m, n, p \in \mathbb{N} \).
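For illustration (an example constructed here, not taken from [59]), a small instance of Definition 1.2.1 is
\[
\text{minimize } f(x) = x_1^2 + x_2^2 \quad \text{subject to } g_1(x) = 1 - x_1 - x_2 \le 0,
\]
whose globally optimal solution is \( x^* = (0.5, 0.5) \) with \( f(x^*) = 0.5 \).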
Definition 1.2.2. A combinatorial optimization problem \( P = (D, X, f) \) can be defined by:
- an objective function \( f \) to be minimized (maximized), where \( f: D_1 \times D_2 \times \ldots \times D_n \rightarrow \mathbb{R}^{+} \);
- a set of feasible solutions \( X = \{ x = \{(x_1, v_1), (x_2, v_2), \ldots, (x_n, v_n)\} \mid v_i \in D_i;\ x_i \text{ satisfies all the constraints} \} \).
In the realm of combinatorial optimization, \( X \) is referred to as the search or solution space, where each element represents a candidate solution. The objective is to identify a solution \( x^* \in X \) that minimizes the objective function value, i.e., \( f(x^*) \leq f(x) \) for all \( x \in X \). Such an \( x^* \) is called a globally optimal solution of \( (X, f) \), and the subset \( X^* \subseteq X \) comprises all globally optimal solutions.
Combinatorial optimization (CO) problems, such as Timetabling and Scheduling (TaSP), the Travelling Salesman Problem (TSP), and the Quadratic Assignment Problem (QAP), are significant in real-world applications. Numerous algorithms have been developed to address these problems, which can be categorized into exact and approximate algorithms. Exact algorithms guarantee an optimal solution for every finite instance of a CO problem within a limited timeframe. However, for NP-hard CO problems, no polynomial-time algorithm is known, so exact methods may require exponential computation time in the worst case, making them impractical. Consequently, there is an increasing focus on approximate methods to effectively tackle CO problems.
In approximate methods, one gives up the guarantee of finding optimal solutions for achieving good solutions in a significantly reduced amount of time.
Approximate algorithms
Single-solution-based metaheuristic
Single-solution-based metaheuristics (S-metaheuristics), also known as trajectory methods, encompass local search-based techniques such as local search (LS), tabu search (TS), and iterated local search (ILS). These methods focus on improving a single solution while addressing optimization problems.
S-metaheuristics utilize iterative procedures to navigate through neighborhoods or search trajectories within a problem's search space, transitioning from one solution to another. These methods demonstrate significant efficiency in addressing a variety of optimization challenges across multiple domains.
Iterative improvement, often referred to as simple local search, is a process where each move consists of selecting a solution \( x' \) from the neighborhood \( N(x) \) of the current solution, but only if the new solution improves on the existing one. The algorithm continues until it reaches a local optimum. As outlined in Algorithm 1.1, it begins with an initial solution and, during each iteration, the heuristic replaces the current solution with a neighboring solution that improves the objective function. The process concludes when all neighboring candidates yield worse results.
Local search with a binary representation of solutions, a flip move operator, and a best-neighbor selection strategy can be illustrated on the objective of maximizing \( x^3 - x^2 + x \). The process culminates when the search reaches a local optimum, exemplified by the final solution \( x = (11110) \), derived from the initial solution \( x_0 = (11010) \). To enhance efficiency, the search may use a restricted neighborhood strategy, focusing on a subset of candidate solutions from larger neighborhoods. Variants of local search can be categorized based on the generation order of neighboring solutions (deterministic or stochastic) and the method of selecting these neighboring solutions.
Figure 1.8 Illustration of local search behavior in a given landscape
Algorithm 1.1: Diagram of a local search algorithm
    s ← s0; t ← 0;
    repeat
        Generate N(s);        /* Generate candidate neighbors */
        s ← s';               /* Select a better neighbor s' ∈ N(s) */
        t ← t + 1;
    until Termination Criterion;
    output final solution found (local optimum);
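As a concrete illustration of Algorithm 1.1, the following minimal Java sketch applies best-improvement local search with a bit-flip neighborhood to a toy binary maximization problem. The objective function and all identifiers are assumptions made only for this example.

import java.util.Arrays;

// Minimal best-improvement local search over binary strings (illustrative only).
public class LocalSearchDemo {

    // Toy objective: f(x) = (number of ones)^2, to be maximized.
    static int fitness(boolean[] x) {
        int ones = 0;
        for (boolean b : x) if (b) ones++;
        return ones * ones;
    }

    static boolean[] localSearch(boolean[] s0) {
        boolean[] s = s0.clone();
        while (true) {
            boolean[] best = null;
            int bestFit = fitness(s);
            // Generate the bit-flip neighborhood N(s) and keep the best neighbor.
            for (int i = 0; i < s.length; i++) {
                boolean[] n = s.clone();
                n[i] = !n[i];
                if (fitness(n) > bestFit) {
                    bestFit = fitness(n);
                    best = n;
                }
            }
            if (best == null) return s;   // local optimum: no improving neighbor
            s = best;                     // move to the better neighbor
        }
    }

    public static void main(String[] args) {
        boolean[] s0 = { true, false, true, false, false };
        System.out.println(Arrays.toString(localSearch(s0)));
    }
}

The loop stops as soon as no neighbor improves the objective, i.e., when a local optimum has been reached.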
Population-based metaheuristics
Population-based metaheuristics (P-metaheuristics) enhance multiple candidate solutions by leveraging population characteristics to navigate the search space effectively. These algorithms start with a set of solutions and iteratively generate new populations while replacing the existing ones. This two-phase process, consisting of generation and replacement, continues until predefined stopping criteria are met, ensuring a systematic exploration of potential solutions.
Figure 1.9 Demonstration of main principles of P-metaheuristic
P-metaheuristics are inherently more exploratory due to their diverse initial populations, while S-metaheuristics focus more on exploitation. The significance of the initial population selection is frequently overlooked in the design of a P-metaheuristic. However, this step plays an important role in the effectiveness of the algorithm and its efficiency. Hence, we should pay more attention to this step.
In the generation of the initial population, the main criterion to manage is diversification.
If the initial population is not well diversified, premature convergence can occur for any P-metaheuristic. For example, this may happen if the initial population is generated using a greedy heuristic or an S-metaheuristic (e.g., local search, tabu search) for each solution of the population [72].
Many stopping criteria based on the evolution of a population may be used Some of them are similar to those designed for S-metaheuristics.
In a static procedure, the endpoint of the search is predetermined, allowing for the use of a fixed number of iterations, a limitation on CPU resources, or a maximum count of objective function evaluations.
In an adaptive procedure, the endpoint of the search cannot be predetermined. For instance, the search may stop after a fixed number of iterations (generations) without improvement, or when an optimal or satisfactory solution is reached (e.g., a given error relative to the optimum, or to an approximation of it when a lower bound is known beforehand).
Stopping criteria specific to P-metaheuristics often rely on statistical measures of the current population's diversity. These criteria are designed to halt the algorithm when the diversity falls below a predetermined threshold, indicating a state of population stasis. Continuing the execution of a P-metaheuristic is ineffective once the population reaches this stasis, so monitoring the diversity level is essential.
Evolutionary algorithms are stochastic population-based metaheuristics that have proven effective in addressing a wide range of complex real-world problems They are among the most researched algorithms in the field, demonstrating significant success in tackling challenging optimization issues across various domains.
The diagram of an evolutionary algorithm is presented in Algorithm 1.2.
Algorithm 1.2: Diagram of an evolutionary algorithm
    Generate the initial population P(0); t ← 0;
    repeat
        Evaluate(P(t));
        P'(t) ← Selection(P(t));
        P'(t) ← Reproduction(P'(t));
        P(t + 1) ← Replace(P(t), P'(t));
        t ← t + 1;
    until Termination Criterion(P(t));
    output best individual or best population found;
Evolutionary Algorithms (EAs) are iterative optimization techniques inspired by natural selection, simulating the evolution of species. They begin with a randomly generated population, where each individual represents a potential solution encoded in a specific format. An objective function assigns a fitness value to each individual, reflecting its effectiveness in solving the problem at hand. In each iteration, individuals are selected as parents based on their fitness, with those exhibiting higher fitness having a greater chance of selection. These selected individuals undergo reproduction through variation operators, such as crossover and mutation, to produce new offspring. A replacement scheme then determines which individuals (parents or offspring) will continue to the next generation. This iterative process continues until a predetermined stopping criterion is met, paralleling the evolutionary process with optimization problem solving.
In evolutionary algorithms, the genotype represents the encoding, while the phenotype represents the solution; hence, producing a solution in each generation requires decoding the genotype to obtain the phenotype. Variation operators act at the genotype level, while the fitness function evaluates the phenotype.
Table 1.1 Evolution process versus solving an optimization problem
Evolution              Problem solving
Individual             Solution
Fitness                Objective function
Environment            Optimization problem
Locus                  Element of the solution
As illustrated in Figure 1.11, the fitness of an individual measures its ability to survive in its environment. When direct encoding is employed, the genotype closely resembles the phenotype. In contrast, with indirect encoding, the genotype and phenotype are distinct structures, and a decoding function is needed to convert the genotype into a phenotype.
Figure 1.11 Genotype versus phenotype in evolutionary algorithms.
Common concepts for evolutionary algorithms
The main search components for designing an evolutionary algorithm are listed as follows:
1 Representation: the representation is a common search component for all metaheuristics.
In the Evolutionary Algorithm (EA) community, particularly within Genetic Algorithms (GAs), a solution is known as a chromosome, while the individual decision variables that make up this solution are termed genes. The potential values that these genes can take are referred to as alleles, and the specific position of a gene within a chromosome is called a locus.
2 Population initialization: the population initialization is a common search component for all P-metaheuristics.
3 Objective function: an objective function is a common search component for all metaheuristics. In the EA community, the term fitness refers to the objective function.
4 Selection strategy: the selection strategy addresses the following question: “Which parents for the next generation are chosen with a bias toward better fitness?”
5 Reproduction strategy: the reproduction strategy consists in designing suitable mutation and crossover operator(s) to generate new individuals (offspring).
6 Replacement strategy: the new offspring compete with old individuals for their place in the next generation.
7 Stopping criteria: this is a common search component for all metaheuristics.
The following deals with the specific search components for evolutionary algorithms: selection, reproduction, and replacement.
The selection mechanism is a crucial component in evolutionary algorithms (EAs), operating on the principle that individuals with better performance have a greater likelihood of being chosen as parents. This process ensures that the fittest individuals contribute to the next generation, enhancing the overall effectiveness of the algorithm.
Selection pressure encourages populations to evolve towards more effective solutions, while also allowing for the retention of less favorable individuals, as they may contribute valuable genetic material. The selection strategy determines which individuals are chosen for reproduction and the number of offspring each selected individual can produce.
After selecting individuals to serve as parents, the reproduction phase involves applying variation operators like mutation and crossover to generate new offspring.
Mutation operators are unary operators that apply small changes to individual members of a population. In contrast, crossover operators combine characteristics from two parent individuals to create offspring. The design of crossover operators depends on the representation used, distinguishing them from unary operators such as mutation. While S-metaheuristics rely solely on unary search operators, crossover techniques have evolved independently within the evolutionary computation community.
The replacement phase involves selecting survivors from both parent and offspring populations With a constant population size, this process enables the removal of individuals based on a specific selection strategy.
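To tie the components above together, the following minimal Java sketch implements a generational genetic algorithm with a binary representation, tournament selection, one-point crossover, bit-flip mutation, and full generational replacement on the one-max problem. The problem and all identifiers are illustrative assumptions and do not correspond to the algorithms proposed in this dissertation.

import java.util.Arrays;
import java.util.Random;

// Minimal generational GA sketch (illustrative only).
public class SimpleGA {

    static final Random RNG = new Random();

    // Toy fitness: number of ones in the chromosome (to be maximized).
    static int fitness(boolean[] chromosome) {
        int ones = 0;
        for (boolean gene : chromosome) if (gene) ones++;
        return ones;
    }

    // Binary tournament selection: the fitter of two random individuals wins.
    static boolean[] select(boolean[][] population) {
        boolean[] a = population[RNG.nextInt(population.length)];
        boolean[] b = population[RNG.nextInt(population.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    // One-point crossover producing a single offspring.
    static boolean[] crossover(boolean[] p1, boolean[] p2) {
        int cut = 1 + RNG.nextInt(p1.length - 1);
        boolean[] child = new boolean[p1.length];
        for (int i = 0; i < p1.length; i++) child[i] = (i < cut) ? p1[i] : p2[i];
        return child;
    }

    // Bit-flip mutation with a small per-gene probability.
    static void mutate(boolean[] chromosome, double pm) {
        for (int i = 0; i < chromosome.length; i++) {
            if (RNG.nextDouble() < pm) chromosome[i] = !chromosome[i];
        }
    }

    public static void main(String[] args) {
        int popSize = 30, genes = 20, generations = 100;
        boolean[][] population = new boolean[popSize][genes];
        for (boolean[] ind : population)                      // random initial population
            for (int i = 0; i < genes; i++) ind[i] = RNG.nextBoolean();

        for (int g = 0; g < generations; g++) {
            boolean[][] next = new boolean[popSize][];
            for (int i = 0; i < popSize; i++) {               // generation of offspring
                boolean[] child = crossover(select(population), select(population));
                mutate(child, 1.0 / genes);
                next[i] = child;
            }
            population = next;                                // generational replacement
        }

        boolean[] best = Arrays.stream(population)
                .max((x, y) -> Integer.compare(fitness(x), fitness(y))).get();
        System.out.println("best fitness: " + fitness(best));
    }
}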
The particle swarm optimization (PSO) algorithm is also a stochastic P-metaheuristic, inspired by swarm intelligence [75, 76].
Conclusion
This chapter provides essential background on barrier coverage issues in wireless sensor networks (WSNs), including an overview of WSNs, sensors, and sensor nodes. It discusses the challenges associated with barrier coverage in WSNs and defines key terms related to these problems. Furthermore, it outlines optimization problem concepts, approximate algorithms for addressing these problems, and the theoretical foundations of heuristic and metaheuristic algorithms, which will be referenced throughout this dissertation.
MINIMAL EXPOSURE PATH PROBLEMS IN OMNI-DIRECTIONAL WIRELESS SENSOR NETWORKS
This chapter explores the MEP problem within omni-directional sensor networks, focusing on mobile sensor networks and addressing the MEP problem under a probabilistic coverage model in wireless sensor networks (WSNs) with efficient solutions.
Research results in this chapter have been published in [1, 2, 5] of the PUBLICATIONS list.
Minimal exposure path problem in mobile wireless sensor networks
Motivations
Mobile wireless sensor networks (MWSNs) have garnered significant interest due to their diverse applications. An MWSN comprises mobile sensor nodes, each equipped with a locomotive unit, allowing them to move after deployment. These sensors can be mounted on mobile platforms, such as robots, enabling them to reach specific areas effectively.
MWSNs are extremely valuable in situations where the deployment mechanisms of traditional static wireless sensor networks (SWSNs) are inadequate, for instance in hostile environments where manual or air-dropped sensor placement is unfeasible. Meanwhile, MWSNs are crucial for enhancing homeland security, as sensors can be integrated into various vehicles such as subway trains, taxis, police cars, fire trucks, and boats, or carried by personnel including police officers and firefighters.
The coverage of a Mobile Wireless Sensor Network (MWSN) is influenced by both initial network configurations and the mobility patterns of the sensors By leveraging the mobility of sensor nodes, MWSNs can enhance coverage quality, extend network lifetime, and optimize resource utilization and relocation effectively.
The MEP problem in SWSNs has been extensively studied by the academic community.
Despite the significance of the MEP problem, it has not been thoroughly investigated in Mobile Wireless Sensor Networks (MWSNs) Recognizing the benefits of MWSNs, we are pioneering the establishment of the MEP problem within this context, introducing what we term the Mobile MEP (MMEP) problem.
The MMEP problem is transformed into a numerical functional extreme problem, presenting greater challenges than the conventional MEP problem in SWSNs due to its high dimensionality, non-linear objective function, and the mobility of sensors To address these complexities, we propose two effective metaheuristic algorithms designed specifically for solving the MMEP problem.
Preliminaries and problem formulation
We have used the attenuated coverage model, which can be defined as follows [36]:
The attenuated disk coverage model is defined as
\[
f(d(s_i, l)) = \frac{C}{d(s_i, l)^{\lambda}}, \qquad (2.1)
\]
where \( f(d(s_i, l)) \) denotes the sensing function, \( d(s_i, l) \) the Euclidean distance from sensor \( s_i \) to a target point at location \( l \), \( \lambda \) the attenuation exponent, and \( C \) a constant. The truncated version of the model is defined as
\[
f(d(s_i, l)) =
\begin{cases}
1, & d(s_i, l) \le r_1,\\
e^{-\alpha \left( d(s_i, l)/r_1 - 1 \right)^{\beta}}, & r_1 < d(s_i, l) \le r_2,\\
0, & \text{otherwise},
\end{cases} \qquad (2.2)
\]
where \( \alpha \) and \( \beta \) are constants, \( r_2 \) is called the sensing range, and \( r_2 - r_1 \) is called the uncertain range. Figure 2.1(b) demonstrates the truncated attenuated disk ability of the sensor.
Figure 2.1 Illustration of (a) the attenuated disk model; (b) the truncated attenuated disk model
The sensor field intensity model specifies the collaboration of sensors in the sensing field. The sensing intensity measured at a target point \( l \) from all sensors at a time \( t \) is:
\[
I(l, t) = \sum_{i=1}^{N} f(d(s_i(t), l)), \qquad (2.3)
\]
where \( N \) is the number of sensor nodes in the sensor network, i.e., the sensor network consists of \( N \) sensors \( \{s_1, s_2, \ldots, s_N\} \).
According to [36], the exposure for the object/intruder \( O \) in the sensor field during the interval \( [t_1, t_2] \) along the path \( l(t) \) is defined as:
\[
E(l(t), t_1, t_2) = \int_{t_1}^{t_2} I(l(t), t) \left| \frac{dl(t)}{dt} \right| dt. \qquad (2.4)
\]
In an SWSN, the sensing intensity at a specific position does not depend on time, which reduces \( I(l(t), t) \) to \( I(l(t)) \) only. The expression \( \left| \frac{dl(t)}{dt} \right| dt \) can be replaced by \( dl \), which represents a path element.
To avoid ambiguity, we denote the intruder trajectory as \( \wp \) instead of \( l(t) \). Then, Equation (2.4) can be rewritten as follows:
\[
E(\wp) = \int_{\wp} I(l)\, dl. \qquad (2.5)
\]
Equation (2.5) indicates that the exposure of an object or intruder \( O \) along path \( \wp \) is determined solely by the geometric characteristics of \( \wp \), rather than by time. This equation can be understood as the total accumulation of sensing intensity over all points along path \( \wp \).
In mobile wireless sensor networks (MWSNs), Equation (2.5) is not suitable for calculating the exposure value, for several reasons. Firstly, the sensing intensity at a specific location varies over time. Secondly, it is expected that the exposure value of an object or intruder increases as the duration of its detection by the sensor network extends.
The duration of an object's presence in the sensing field is also significant. If the intruder freezes (remains stationary) for a time interval, its exposure during that interval according to Equation (2.5) is zero, which is unrealistic. Instead, exposure should be calculated as the cumulative sensing intensity over the successive states of the intruder. In static sensing fields, an intruder's position defines its state, while in mobile sensor networks, time serves as the indicator of the intruder's state. Therefore, we propose a new method for calculating exposure in an MWSN, with a constant intruder velocity, as follows:
\[
E(l(t)) = \int_{0}^{T} I(l(t), t)\, dt. \qquad (2.6)
\]
Here, we assume that the intruder always enters the sensing field at \( t = 0 \) and exits at \( t = T \). Combining Equation (2.3) and Equation (2.6), we obtain Equation (2.7):
\[
E(l(t)) = \int_{0}^{T} \sum_{i=1}^{N} f(d(s_i(t), l(t)))\, dt. \qquad (2.7)
\]
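For illustration, the exposure integral of Equation (2.7) can be approximated numerically with a simple Riemann sum over the interval [0, T], as in the Java sketch below. The intruder path, the sensor trajectory, and the coverage function used here are assumptions made only for this example and are unrelated to the GB-MEP and GA-MEP algorithms.

import java.util.function.DoubleUnaryOperator;

// Illustrative numerical approximation of the exposure integral of Equation (2.7).
public class ExposureDemo {

    interface Trajectory { double[] at(double t); }   // returns (x, y) at time t

    static double distance(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    // Riemann-sum (midpoint rule) approximation of the exposure over [0, T].
    static double exposure(Trajectory intruder, Trajectory[] sensors,
                           DoubleUnaryOperator coverage, double T, int steps) {
        double dt = T / steps, total = 0.0;
        for (int k = 0; k < steps; k++) {
            double t = (k + 0.5) * dt;                 // midpoint of the k-th subinterval
            double[] l = intruder.at(t);
            double intensity = 0.0;
            for (Trajectory s : sensors) {
                intensity += coverage.applyAsDouble(distance(s.at(t), l));
            }
            total += intensity * dt;
        }
        return total;
    }

    public static void main(String[] args) {
        double T = 10.0;
        // Assumed intruder path: straight line from (0, 150) to (500, 350).
        Trajectory intruder = t -> new double[] { 500.0 * t / T, 150.0 + 200.0 * t / T };
        // One mobile sensor moving horizontally (assumed trajectory).
        Trajectory[] sensors = { t -> new double[] { 100.0 + 5.0 * t, 250.0 } };
        // Assumed coverage function: f(d) = exp(-0.1 d) if d <= 200, else 0.
        DoubleUnaryOperator coverage = d -> (d <= 200.0) ? Math.exp(-0.1 * d) : 0.0;
        System.out.println(exposure(intruder, sensors, coverage, T, 10_000));
    }
}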
The Mobile Minimal Exposure Path (MMEP) problem involves \( N \) mobile sensors, \( S = \{s_1, s_2, \ldots, s_N\} \), which are randomly deployed within a designated region of interest (ROI). Each sensor \( s_i \) has an initial location \( s_i(0) \), and its position at time \( t \) is represented as \( s_i(t) \). These sensors move along predetermined trajectories at a constant speed. An intruder, starting from a source point and aiming to reach a destination point, can only remain within the sensing field during the time interval \( [0, T] \). The intruder's speed is capped at a maximum value, and it is assumed that the intruder always moves at this maximum speed to minimize its exposure time within the sensing area. The primary objective of the MMEP problem is to identify a penetration path \( \wp \) from the starting point \( B \) to the endpoint \( E \) that yields the lowest possible exposure value along the path.
W, L: the width and the length of the ROI.
The distance between two consecutive genes, except for the last two genes, is equal to the subinterval ∆s.
To initialize individuals with the desired characteristics, a random method is utilized, where each gene is represented as a point on the right arc of a circle. This circle is centered at the previous gene and has a radius of ∆s, ensuring that all points remain within the sensor field. An example of the process of creating an individual is illustrated in Figure 2.18, starting with a point that is positioned at a defined distance from the first gene.
A path is first established between the initial gene point (0, yB) and a randomly selected point on the left boundary whose distance to the first gene is divisible by ∆s. Subsequently, each new gene is generated as a random point on the right arc of a circle centered at the previous gene, with radius ∆s. This process continues until a right arc intersects the right boundary of the region of interest (ROI). For instance, as illustrated in Figure 2.18, the circle of gene (x8, y8) intersects the right boundary, leading to the selection of a random point (x9, y9). If the distance between the current point and the destination exceeds ∆s, additional genes representing boundary points are added to maintain this distance; otherwise, the gene (L, yE) is added, concluding the process. As shown in Figure 2.17, since the distance between (x9, y9) and (L, yE) is greater than ∆s, a new gene (x10, y10) is added, forming a path (represented by red lines from B to E) that is the phenotype of the individual. The initialization method is detailed in the algorithm below.
The width and length of the sensor field: W, L
The coordinates of the source point: B(0, yB)
The coordinates of the destination point: E(L, yE)
Output: An individual indi = [(0, yB), (x1, y1), ..., (L, yE)]
begin
    Let G(xG, yG) be the latest gene added to indi, and let CG be the right half circle centered at G with radius ∆s;
    indi.add(B);
    Generate R(0, yR) on the left boundary with distance(E, R) divisible by ∆s;
    while distance(G, R) > 0 do
        indi.add((0, yG + (yR > yG ? ∆s : −∆s)));
    end
    while (L − xG) > ∆s do
        Randomly generate a point R on CG such that R ∈ ROI;
        indi.add(R);
    end
    indi.add(E);
end
By creating a fixed number of individuals using the above algorithm, a population is initialized.
Genetic operators
Crossover operators: two kinds of crossover operators are used in GA-MEP, one of which is the ALX-α crossover.
ALX-α is a modified crossover operator derived from BLX-α, which is traditionally used in real-coded genetic optimization problems to produce offspring. Unlike the standard BLX-α, which generates gene values from intervals based on parent genes, ALX-α adapts this process by creating offspring gene values through angular calculations, tailored to the encoding scheme in GA-MEP. This operator's functionality is exemplified using a pair of parent individuals.
Figure 2.18 Execution of creating an individual in the sensor field

repeat
    Find Fj so that Fj.x ... Ci.x and j is minimum;
    Find Fk so that Fk.x > Ci.x and k is minimum;
    Calculate lb and ub; randomly generate ϕ in the range [lb, ub];
    Add point Ci+1(Ci.x + ∆s cos ϕ, Ci.y + ∆s sin ϕ) to child; i = i + 1;
until distance(Ci+1, E) < ∆s;
Add point E to child;
return child;
Experimental results
The key parameters, such as the number of sensor nodes, the threshold A, and the subinterval ∆s, are evaluated first. Subsequently, the effectiveness of the proposed algorithms is demonstrated through a comparative analysis of GB-MEP, GA-MEP, and existing algorithms.
Topology scenarios: using different numbers of sensor nodes with different distribution methods to delve into the effectiveness of our algorithms in different network topologies.
Parameter trial scenarios: experimenting with different values of some important parameters to explore how the performance of the proposed algorithms is affected.
The study encompasses 240 distinct network topologies categorized into three types based on the sensor deployment method, with each type featuring 80 unique topologies. The sensors are positioned within a 500 × 500 sensor field using three different deployment techniques.
Exponential distribution method: a straightforward technique for generating exponential variates is inverse transform sampling. Using a uniform variate U in the range (0, 1), the exponential variate T is calculated as T = -ln(U) / λ, where λ is the rate parameter of the distribution. With λ set to 1, the x-coordinate of a sensor is generated as an exponential variate within the interval [0, L], and the y-coordinate is derived in the same way (a code sketch of this and of the Gaussian generation is given after the description of the deployment methods below).
Uniform distribution method: each sensor is generated by an (x, y) coordinate, where x is a random double in the range [0, L] and y is a random double in the range [0, W].
Gaussian distribution method: to generate a Gaussian distribution variate X (expectation µ, variance σ²), we use the equation X = µ + σV, where V is a standard normal distribution variate (zero expectation, unit variance) generated by the Box-Muller transform.
Table 2.8 Experimental parameters of the probabilistic coverage model
Table 2.9 Experimental parameters of GA-MEP
    The number of runs on each instance        50
    The value of α in ALX-α                    0.5
The x-coordinate of a sensor follows a Gaussian distribution with an expectation of L/2 and a variance of (L/6)². Since 99.7% of the variates fall within the range [µ − 3σ, µ + 3σ], the x-coordinate is expected to lie within [0, L]. Any variate that falls outside the field is regenerated, and the y-coordinate of the sensor is determined in a similar manner.
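The following Java sketch illustrates the three deployment methods described above, using inverse transform sampling for the exponential coordinates and the Box-Muller transform for the Gaussian coordinates. The identifiers and the policy of redrawing out-of-field exponential variates are assumptions made only for this example.

import java.util.Random;

// Illustrative sketch of the exponential, uniform, and Gaussian deployment methods.
public class DeploymentSampling {

    static final Random RNG = new Random();

    // Inverse-transform sampling of an exponential variate restricted to [0, max].
    static double exponentialInRange(double lambda, double max) {
        while (true) {
            double u = RNG.nextDouble();
            if (u == 0.0) continue;                    // avoid ln(0)
            double t = -Math.log(u) / lambda;          // T = -ln(U) / lambda
            if (t <= max) return t;
        }
    }

    // Standard normal variate generated by the basic Box-Muller transform.
    static double standardNormal() {
        double u1 = RNG.nextDouble(), u2 = RNG.nextDouble();
        while (u1 == 0.0) u1 = RNG.nextDouble();       // avoid ln(0)
        return Math.sqrt(-2.0 * Math.log(u1)) * Math.cos(2.0 * Math.PI * u2);
    }

    // Gaussian variate X = mu + sigma * V, redrawn until it falls inside [0, max].
    static double gaussianInRange(double mu, double sigma, double max) {
        while (true) {
            double x = mu + sigma * standardNormal();
            if (x >= 0.0 && x <= max) return x;        // regenerate if outside the field
        }
    }

    public static void main(String[] args) {
        double L = 500.0, W = 500.0, lambda = 1.0;

        double[] exponentialSensor = { exponentialInRange(lambda, L),
                                       exponentialInRange(lambda, W) };
        double[] uniformSensor     = { RNG.nextDouble() * L, RNG.nextDouble() * W };
        double[] gaussianSensor    = { gaussianInRange(L / 2, L / 6, L),
                                       gaussianInRange(W / 2, W / 6, W) };

        System.out.printf("exponential: (%.1f, %.1f)%n", exponentialSensor[0], exponentialSensor[1]);
        System.out.printf("uniform:     (%.1f, %.1f)%n", uniformSensor[0], uniformSensor[1]);
        System.out.printf("gaussian:    (%.1f, %.1f)%n", gaussianSensor[0], gaussianSensor[1]);
    }
}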
The study categorizes the 80 network topologies of each type into eight groups based on the number of sensors deployed in the field, specifically 30, 40, 50, 60, 70, 80, 90, and 100 sensors. For each group, 10 distinct network topologies were generated. Each topology is identified using a naming convention that includes the deployment method ("u" for the Uniform distribution method, "e" for the Exponential distribution method, and "g" for the Gaussian distribution method), the number of sensors Num, and the order Ord of the topology in its set.
The source point and the destination point are fixed at (0, 150) and (500, 350). A probabilistic coverage model with the parameters presented in Table 2.8 is applied to every sensor.
Both algorithms utilize a common parameter, the subinterval ∆s; GB-MEP is defined by this single parameter. In contrast, GA-MEP involves multiple parameters, which were tested across various ranges to identify the most effective values, as detailed in Table 2.9. All experiments were conducted on a system equipped with an Intel Core i7-4720HQ processor running at 2.60 GHz and 8 GB of RAM, under Windows 10, with the algorithms implemented in Java.
Table 2.10 Experimental parameters of HGA-NFE
    The number of runs on each instance        50
To enhance the analysis, we first introduce the saw-tooth degree D_ST as a specialized measure. Subsequently, we conduct experiments to test various parameter values, including the subinterval and the threshold A. We then compare the optimal solutions and computation times of GB-MEP and GA-MEP across different topologies, maintaining consistent distances between consecutive points. Finally, we evaluate the effectiveness of our algorithms by comparing GA-MEP with the existing genetic method HGA-NFE.
Previous studies on this problem reveal a common issue: the obtained MEPs often exhibit a saw-tooth wave pattern, likely due to the random initialization methods used. This contrasts with real-world scenarios, where intruders typically prefer smoother, shorter paths. Therefore, evaluating the degree of saw-tooth in the solutions is essential for providing practical insight into algorithm performance. A lower saw-tooth degree indicates that the proposed algorithm is more aligned with realistic scenarios, leading to improved accuracy. To assess the saw-tooth degree of a solution (path), the following method is employed.
The saw-tooth degree D_ST(i) of a point is calculated as its distance to the line formed by its two neighboring points; the first and last points of the path are excluded. This measure reflects the path's smoothness: a higher saw-tooth degree indicates a less smooth trajectory. Therefore, the average saw-tooth degree over the interior points of a path serves as a key indicator of the path's smoothness.
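Formally, for a path represented by the points \( p_1, p_2, \ldots, p_n \), the computation described above can be summarized as (the notation is chosen here for presentation only)
\[
D_{ST}(p_i) = \operatorname{dist}\!\left(p_i, \overline{p_{i-1} p_{i+1}}\right), \qquad
D_{ST} = \frac{1}{n-2} \sum_{i=2}^{n-1} D_{ST}(p_i),
\]
where \( \operatorname{dist}(p, \overline{ab}) \) denotes the Euclidean distance from the point \( p \) to the line through \( a \) and \( b \).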
When this measure is applied to our problem, the saw-tooth degrees of the obtained solutions are calculated and analyzed to give some insight into the proposed algorithms.
Performance of GB-MEP and GA-MEP by using different subintervals
In our experiments, we analyze various subinterval values ∆s to assess their impact on the minimal exposure value of the MEP, the computation time, and the saw-tooth degrees of the two proposed algorithms. Reducing the subinterval ∆s increases the complexity and dimensionality of the problem, leading to longer computation times; however, it yields more accurate results.
For this experiment, the topology used is u 50 1, and the subinterval ∆s is set to different values: 5, 2, 1, 0.5, and 0.2.
Table 2.11 compares the minimal exposure value, computation time, and saw-tooth degree of the GB-MEP method with those of the GA-MEP method. Additionally, Figure 2.23(a) illustrates the variations in minimal exposure values, Figure 2.23(b) displays the computation times, and Figure 2.23(c) shows the saw-tooth degrees of both algorithms across different subinterval values ∆s.
Table 2.11 presents a comparison of minimal exposure values (Mev), computation times (Time, in seconds), and saw-tooth degrees (D_ST) between GA-MEP and GB-MEP across various subintervals ∆s, using the topology u 50 1.
Subinterval ∆s        GB-MEP                           GA-MEP
                      Mev    Time(s)    D_ST           Mev    Time(s)    D_ST
As the subinterval value ∆s decreases, the exposure of the MEP and the saw-tooth degree of both GA-MEP and GB-MEP improve, leading to a smoother path. This is expected, as a smaller ∆s allows the MEP to be refined further, resulting in a reduced saw-tooth degree. However, the computation time of both algorithms increases significantly as ∆s decreases, which is understandable given the increased complexity.
The GA-MEP method consistently outperforms the GB-MEP method in terms of exposure values and saw-tooth degrees for all values of ∆s, which demonstrates the effectiveness of the GA-MEP approach; GA-MEP also has higher stability. In terms of computation time, GB-MEP is still much faster than GA-MEP.
Figure 2.23 The minimal exposure values, computation times, and saw-tooth degrees of GB-MEP and GA-MEP when using different subinterval ∆s values on topology u 50 1.
As the subinterval decreases, the computation time for GB-MEP rises significantly faster than that of GA-MEP, indicating that the performance of GB-MEP is more heavily influenced by the complexity and dimensionality of the instance compared to GA-MEP.