TY - GEN
T1 - Deep Neural Crossover
T2 - 2024 Genetic and Evolutionary Computation Conference, GECCO 2024
AU - Shem-Tov, Eliad
AU - Elyasaf, Achiya
N1 - Publisher Copyright:
© 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/7/14
Y1 - 2024/7/14
N2 - We present a novel multi-parent crossover operator for genetic algorithms (GAs) called "Deep Neural Crossover" (DNC). Unlike conventional GA crossover operators, which rely on a random selection of parental genes, DNC leverages deep reinforcement learning (DRL) and an encoder-decoder architecture to select the genes. Specifically, we use DRL to learn a policy for selecting promising genes. The policy is stochastic, to preserve the stochastic nature of GAs; it represents a distribution that assigns higher selection probability to genes that are more likely to improve fitness. Our architecture features a recurrent neural network (RNN) that encodes the parental genomes into latent memory states and a decoder RNN that uses an attention-based pointing mechanism to generate a distribution over the next gene to select for the offspring. The operator's architecture is designed to capture linear and nonlinear correlations between genes and translate them into gene selection. To reduce computational cost, we present a transfer-learning approach in which the architecture is first trained on a single problem within a specific domain and then applied to other problems in the same domain. We compare DNC with known operators from the literature on two benchmark domains; DNC outperforms all baselines.
AB - We present a novel multi-parent crossover operator for genetic algorithms (GAs) called "Deep Neural Crossover" (DNC). Unlike conventional GA crossover operators, which rely on a random selection of parental genes, DNC leverages deep reinforcement learning (DRL) and an encoder-decoder architecture to select the genes. Specifically, we use DRL to learn a policy for selecting promising genes. The policy is stochastic, to preserve the stochastic nature of GAs; it represents a distribution that assigns higher selection probability to genes that are more likely to improve fitness. Our architecture features a recurrent neural network (RNN) that encodes the parental genomes into latent memory states and a decoder RNN that uses an attention-based pointing mechanism to generate a distribution over the next gene to select for the offspring. The operator's architecture is designed to capture linear and nonlinear correlations between genes and translate them into gene selection. To reduce computational cost, we present a transfer-learning approach in which the architecture is first trained on a single problem within a specific domain and then applied to other problems in the same domain. We compare DNC with known operators from the literature on two benchmark domains; DNC outperforms all baselines.
KW - combinatorial optimization
KW - genetic algorithm
KW - recombination operator
KW - reinforcement learning
KW - surrogate model
UR - http://www.scopus.com/inward/record.url?scp=85201131616&partnerID=8YFLogxK
U2 - 10.1145/3638529.3654020
DO - 10.1145/3638529.3654020
M3 - Conference contribution
AN - SCOPUS:85201131616
T3 - GECCO 2024 - Proceedings of the 2024 Genetic and Evolutionary Computation Conference
SP - 1045
EP - 1053
BT - GECCO 2024 - Proceedings of the 2024 Genetic and Evolutionary Computation Conference
PB - Association for Computing Machinery, Inc
Y2 - 14 July 2024 through 18 July 2024
ER -