
Science, Passion, and the Future of Multi-Objective Optimization


I’d like to delve into your personal journey. You had to find a suitable research topic for your PhD at Tulane University in 1996. Can you briefly tell me the story that led you to work on evolutionary multi-objective optimization?

That is a long story, so I’ll try to be brief. When I got to Tulane for my master’s and then my PhD in computer science, I didn’t know what topic I wanted to work on. I knew I didn’t want to do software engineering or databases. First, I tried programming languages, then robotics. Neither worked. By chance, one day, I read a paper that used genetic algorithms to solve a structural optimization problem. I decided to dedicate a course assignment to this paper, developed my own genetic algorithm, and wrote software for the evaluation. This got me very excited, as I could now see how a genetic algorithm was able to produce good solutions to a complex optimization problem relatively easily. This excitement for evolutionary algorithms has stayed with me my entire life.

Nevertheless, although two professors at Tulane worked with evolutionary algorithms, I decided to go with a robotics professor. He didn’t know much about evolutionary computing, and neither did I, but we decided we could work together. As such, he couldn’t help me find a suitable topic. Professor Bill Buckles, who worked with evolutionary algorithms, recommended that I work on multi-objective optimization, as not many people had been using these algorithms in that domain. After searching for related papers, I found my PhD topic. Serendipitously, it all came together without being planned. I think that many great things come about through serendipity rather than through planning.

Can you elaborate on what sparked your interest in evolutionary computing?

There’s a big difference between classical optimization and using evolutionary algorithms. Classical optimization mostly depends on mathematics and calculus, whereas evolutionary algorithms are inspired by natural phenomena. It fascinates me how nature has adapted species in different ways, just aiming for survival, and how this can be such a powerful tool to improve the mechanisms of a particular individual. With evolutionary algorithms, we simulate this process, albeit as a rough, low-fidelity version of what happens in nature.

Evolutionary algorithms have a simple framework, mirroring intricate natural phenomena, which paradoxically yields exceptional problem-solving capabilities. In my pursuit to understand why they are so good, I am still puzzled. I have read many papers related to natural evolution, and I have tried to follow up a little on findings in popular science magazines rather than in technical sources.

The connection between algorithmic and natural evolution has always fascinated me. If circumstances permitted, with the knowledge, time, and skills, I’d devote the rest of my career to trying to understand how they operate.

How has the multi-objective optimization field evolved?

Though the domain of multi-objective optimization is comparatively narrow, my journey began in an era when opportunities were abundant because of the limited number of researchers. This allowed me to explore a diverse array of topics. While the landscape has evolved, I’ve observed that despite a proliferation of papers, a distinct perspective is still lacking.

Why is this perspective lacking?

Researchers are somewhat hesitant to embrace difficult problems and push the boundaries of research topics. Moreover, we struggle to offer robust explanations for our methodologies. We are still not daring enough to tackle difficult problems and difficult research topics, and we are still not able to explain many of the things we have done. We are well equipped with techniques for specific problems, yet we lack a deeper comprehension of those techniques’ underlying principles. Most people focus on proposing, not on understanding. This realization has prompted a shift in my focus.

What role do you take in this development?

As I’ve matured, my priority has shifted from mere proposition to understanding. I feel that if nobody else undertakes this task, it falls upon us to do so. While it’s a difficult endeavour to dissect and understand the mechanisms and reasons behind algorithmic efficacy, I consider this pursuit essential for real scientific advancement. You might need only two or three methods for a problem rather than 200. If there is no way to classify all these methods, one cannot justify a new tool, and I don’t think it makes much sense to continue in this direction. Of course, people will keep producing, and that’s fine. But if we lack understanding, I feel we will end up with a field with no future. Ultimately, my objective is to direct my efforts toward grasping existing tools before determining the need for novel ones.

How can we move toward a better understanding of existing methods?

We should spend more time trying to understand the things we already have. Then, we can assess what we actually need. We should work based on the domain’s needs instead of the desire to have more publications. If we don’t have a tool that meets a given need, then let’s work on developing it. Research should be moving more in this direction of need rather than in the direction of producing numbers.

Are these questions centered around understanding why specific algorithms work?

Well, it’s not only about why they work. The question of why certain algorithms work is undoubtedly crucial, but our inquiries shouldn’t be limited to just that. A critical aspect to delve into is how to best match algorithms to applications. When presented with multiple algorithms, practitioners often grapple with deciding which one is best for a particular application, whether it’s for combinatorial or continuous optimization. The ambiguity lies in discerning the ideal scenarios for each algorithm.

Today, while algorithms designed for specific tasks may not require further characterization, it is equally important to understand and perhaps categorize general algorithms. We should aim to extract more information about how they operate and evaluate whether they really are universally applicable or whether they should be tied to specific tasks.

Beyond algorithms, there are tools and techniques such as scalarizing functions, crossover operators, mutation operators, and archiving techniques. There is a plethora of all of these. Yet only a select few are commonly used, often because they have been employed historically rather than because of an intrinsic understanding of their efficacy. We should be addressing questions like: “Why use one method over another?” It’s these broader, nuanced inquiries that our domain must focus on.
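To make one of those tools concrete, here is a minimal sketch, in Python with NumPy, of two common scalarizing functions: the weighted sum and the weighted Tchebycheff function. The objective values, weights, and reference point below are illustrative assumptions, not values taken from the interview.

```python
import numpy as np

def weighted_sum(f, w):
    """Weighted-sum scalarization: collapses an objective vector f
    into a single value using a weight vector w."""
    return float(np.dot(w, f))

def weighted_tchebycheff(f, w, z_star):
    """Weighted Tchebycheff scalarization: the worst weighted deviation
    of f from an ideal point z_star. Unlike the weighted sum, it can
    reach solutions on non-convex regions of the Pareto front."""
    return float(np.max(w * np.abs(f - z_star)))

# Illustrative bi-objective example (all numbers are made up).
f = np.array([0.40, 0.75])      # objective values of one solution
w = np.array([0.5, 0.5])        # decision-maker weights (assumed)
z_star = np.array([0.0, 0.0])   # ideal point (assumed known)
print(weighted_sum(f, w), weighted_tchebycheff(f, w, z_star))
```

The contrast between these two functions is one small example of the kind of question raised above: which scalarization to use, and when, is rarely justified beyond habit.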

Can you explain how evolutionary algorithms function in multi-objective optimization?

Evolutionary algorithms start with a population of solutions, normally generated randomly. These solutions initially have low quality, but through the selection process, they gradually evolve toward the Pareto front. Nevertheless, it’s important to note that while a Pareto front is generated, users typically don’t require all the solutions on it. Then, a few solutions, or just one, are chosen. But choosing the best solution on the Pareto front is not optimization; it is decision making.

In decision making, a subset or even a single solution is chosen from the Pareto front based on the user’s preferences. Determining a user’s preferences can be straightforward if they have a clear trade-off in mind, but when preferences are uncertain, the algorithm generates several possibilities for the user to evaluate and choose from. This diverges from optimization and delves into decision making. Thus, in multi-objective optimization, there are three distinct stages: modeling, optimization, and decision making.
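As a rough illustration of the selection pressure described above, the following is a minimal Python sketch, assuming all objectives are minimized and NumPy is available, that extracts the non-dominated solutions from a population. A full multi-objective evolutionary algorithm would combine such a filter with variation operators and diversity maintenance, which are omitted here.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated(objectives):
    """Return the indices of the non-dominated solutions in a
    population, given a (pop_size, n_objectives) array."""
    keep = []
    for i, fi in enumerate(objectives):
        if not any(dominates(fj, fi) for j, fj in enumerate(objectives) if j != i):
            keep.append(i)
    return keep

# Illustrative population of objective vectors (made-up values).
pop = np.array([[1.0, 4.0],
                [2.0, 2.0],
                [3.0, 1.0],
                [3.0, 3.0]])   # the last vector is dominated by [2.0, 2.0]
print(non_dominated(pop))      # -> [0, 1, 2]
```

Repeatedly applying such a filter while generating new candidates is, in very rough terms, how a population drifts toward the Pareto front; the decision-making stage then picks from whatever front the run produces.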

I primarily focus on the optimization aspect. Other researchers, particularly in operations research, delve into decision making, and some combine both. These interactive approaches involve running the optimizer for a few iterations and then seeking user input on the desired direction, generating solutions based on the user’s preferences. Interactive methods can be effective, but crafting concise and meaningful user queries is crucial to avoid overwhelming the user.

In an earlier interview, you mentioned that an important criterion for selecting PhD students is their passion. How do you assess passion?

Ideally, students are passionate but are also excellent programmers and mathematicians. Unfortunately, students with all these skills are rare, and a balance between them must be found. One could say this is a multi-objective optimization problem in itself. Passion weighs heavily compared with other traits and skills in my assessment.

Passion is intricate to define but easier to recognize. When I encounter it, a kind of sixth sense guides me in differentiating real passion from feigned enthusiasm. One telltale sign is students who consistently go beyond the scope of assigned tasks, always exceeding expectations. However, this is not the only indicator. Passionate individuals exhibit an insatiable curiosity, not only asking numerous questions about their topic but also independently delving into related areas. They bridge concepts, linking seemingly disparate elements to their work, which is an essential trait in research that relies on creative connections. For me, this indicates a real passion for the craft. In my experience, individuals with an innate passion tend to exhibit an affinity for probing the depths of their topic, exploring facets beyond immediate instruction. Such students possess a research-oriented spirit, not solely seeking prescribed answers but uncovering avenues to enrich their understanding.

The final element involves leveraging and cultivating their skills. Even when a student excels primarily in passion, their other abilities may not be lacking. It’s rare to find a student embodying every desirable trait. More often, students excel in a particular facet while maintaining proficiency in others. For instance, a student might excel in passion, possess good, though not extraordinary, programming skills, and demonstrate solid mathematical foundations. Striking a balance among these attributes constitutes a multi-objective problem, aiming to extract the most from a student based on their unique skill set.

Why is passion so important?

I recall having a few students who were exceptional in various respects but lacked that spark of passion. The work we engaged in, as a result, felt somewhat mundane and uninspiring to me. A passionate student not only strives for their own growth but also reignites my enthusiasm for the subject matter. They challenge me, push me deeper into the topic, and make the collaborative process more stimulating. On the other hand, a student who is merely going through the motions, focusing just on task completion without the drive to delve deeper, doesn’t evoke the same excitement. Such situations tend to become more about ticking boxes to make sure they graduate rather than an enriching exchange of knowledge and ideas. Simply put, without passion, the experience becomes transactional, devoid of the vibrancy that makes academic collaboration truly rewarding.

You favor making a few useful contributions rather than many papers that simply follow a research-by-analogy approach. Since there is usually little novelty in research by analogy, should it be conducted at universities?

The question raises a fundamental consideration: the objectives of universities in their research endeavours. Research by analogy certainly has its place; it’s necessary, and over time, it has incrementally pushed the boundaries of knowledge in specific directions. For instance, in the context of multi-objective optimization, significant progress has occurred over the past 18 years, resulting in the development of improved algorithms. This success validates the role of research by analogy.

However, the potential downside lies in overreliance on research by analogy, which could stifle the reception of truly revolutionary ideas. Novel ideas, when introduced, might face resistance within a system that largely values incremental work. Consequently, a harmonious coexistence between the two modes of research is crucial. Institutions, evaluation systems, and academic journals should incentivize both. Research by analogy serves as a foundation for steady progress, while the cultivation of groundbreaking ideas drives the field forward. This coexistence ensures that while we build upon existing knowledge, we simultaneously embrace avenues leading to unexpected territories. A future devoid of either approach would be less than optimal; therefore, fostering a balanced ecosystem ensures that the field remains vibrant, adaptive, and poised for growth.

Do you incentivize this in your own journal as well?

I do my best, but it’s difficult because it’s not solely within my control. The outcome hinges on the contributions of Associate Editors and reviewers. While I strive not to reject papers with novel ideas, it’s not always feasible. Unfortunately, I have to admit that encountering papers with genuinely new concepts is becoming increasingly rare. Notably, this year, I reviewed a paper for a conference featuring an exceptionally intriguing concept that captivated me. It stands as the most remarkable discovery I’ve encountered in the past 15 years. However, such occurrences are not frequent.

Computational intelligence was historically divided into evolutionary computing, fuzzy logic, and neural networks. The last decade witnessed groundbreaking developments in neural networks, particularly transformer models. What role can evolutionary computing play in this new landscape?

I posit that evolutionary algorithms, traditionally used for evolving neural architectures, have potential yet to be fully harnessed. There’s a possibility of designing robust optimizers that can seamlessly integrate with existing algorithms, like Adam, to train neural networks. There have been a few endeavours in this domain, such as the particle swarm approach, but these efforts have primarily focused on smaller-scale problems. Nevertheless, I anticipate the emergence of more complex challenges in the years ahead.
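As a hedged illustration of the particle swarm idea mentioned here, the sketch below uses a bare-bones particle swarm optimizer in plain NumPy to minimize a made-up quadratic loss standing in for a network’s training loss. It only shows how a population of weight vectors can be improved without gradients; it is not a drop-in replacement for Adam, and the loss function, dimensions, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle is a candidate
    weight vector; velocities are pulled toward each particle's own best
    position and the swarm's global best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                                  # velocities
    pbest = x.copy()                                      # per-particle bests
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(np.min(pbest_val))

# Stand-in for a network's training loss (purely illustrative).
target = np.array([0.3, -0.7, 1.2])
def quadratic_loss(weights):
    return float(np.sum((weights - target) ** 2))

best_w, best_val = pso_minimize(quadratic_loss, dim=3)
print(best_w, best_val)
```

In practice the loss would be a network’s training error on a batch of data, which is where the cost of population-based training becomes noticeable on larger problems.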

Moreover, someone I know firmly believes that deep learning performance can be replicated using genetic programming. The idea could be described as “deep genetic programming.” By incorporating layered trees in genetic programming, the structure would resemble that of deep learning. This is relatively uncharted territory, divergent from the standard neural network approach. The potential advantages? Possibly it would offer more computational efficiency or even higher accuracy. But the actual advantage remains to be explored.

While there are researchers using genetic programming for classification, it’s not a widespread application. Genetic programming has more often been harnessed for constructing heuristics, especially hyper-heuristics for combinatorial optimization. I suspect its limited use for classification problems stems from the computational costs involved. Yet I’m hopeful that with time and technological progress, we’ll see a shift.

In summary, evolutionary computing still has vast areas to explore, be it in augmenting neural networks or challenging them with alternative methodologies. There’s ample room for coexistence and innovation.

Do you perceive the neural network focus as a trend or as a structural shift due to their superior performance?

Many AI people will tell you that it’s fashionable. I’m not so sure; I think this is a very powerful tool, and it will be difficult to outperform deep neural networks. Perhaps in 10–15 years it could happen, but not now. Their performance is such that I find it hard to envision any imminent rival that could easily outperform them, especially considering the extensive research and development invested in this space. Perhaps in a decade or more we might witness changes, but at present, they seem unmatched.

Yet AI is not solely about the tasks deep learning is known for. There are many AI challenges and domains that aren’t necessarily centered around what deep learning primarily addresses. Shifting our focus to these broader challenges could be useful.

One vulnerability to highlight in deep learning models is their sensitivity to ‘pixel attacks’. By tweaking only one pixel, a change often imperceptible to the human eye, these models can be deceived. Recently, evolutionary algorithms have been employed to execute these pixel attacks, shedding light on the frailties of neural networks. Beyond merely pinpointing these weaknesses, there’s an opportunity for evolutionary algorithms to reinforce model resilience against such vulnerabilities. This is a promising avenue that integrates the strengths of both deep learning and evolutionary algorithms.
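To make the idea tangible, here is a minimal, hedged sketch of how an evolutionary search for a single-pixel perturbation could look: a small evolutionary loop over (row, column, value) candidates, scored with a placeholder predict_proba function that stands in for any classifier returning class probabilities. The published one-pixel attack relies on differential evolution; this simplified loop is only a stand-in for that algorithm, and the grayscale image format and all parameters are assumptions.

```python
import numpy as np

def one_pixel_attack(image, true_label, predict_proba,
                     pop_size=30, generations=50, seed=0):
    """Evolve a single-pixel change (row, col, new value) that lowers the
    classifier's confidence in the true label. Assumes a grayscale image
    with values in [0, 1]; predict_proba(img) is a placeholder for any
    model that returns class probabilities."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]

    def fitness(cand):
        r, c, val = cand
        perturbed = image.copy()
        perturbed[r, c] = val                          # overwrite one pixel
        return predict_proba(perturbed)[true_label]    # lower is better

    # Initial population of random candidate pixels.
    pop = [(int(rng.integers(h)), int(rng.integers(w)), float(rng.random()))
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]   # keep best half
        children = []
        for r, c, val in parents:                            # mutate each parent
            children.append(((r + int(rng.integers(-2, 3))) % h,
                             (c + int(rng.integers(-2, 3))) % w,
                             float(np.clip(val + rng.normal(0.0, 0.1), 0.0, 1.0))))
        pop = parents + children                             # elitist replacement
    best = min(pop, key=fitness)
    return best, fitness(best)
```

In practice one would pass the model’s softmax output as predict_proba and check whether the best candidate actually changes the predicted class; the same population machinery could, in principle, be turned around to search for more robust models rather than weak points.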

This marks the end of our interview. Do you have a final remark?

I’d like to stress that research, whatever the domain, holds a captivating allure for those driven by passion. Passion is an essential ingredient for anyone dedicating their career to research. Using tools can be satisfying, but true research involves unearthing solutions to uncharted problems and forging connections between seemingly disparate elements. Cultivating interest among the younger generation is paramount. Science always requires fresh minds, brimming with creativity, prepared to tackle progressively intricate challenges. Given critical issues such as climate change, pollution, and resource scarcity, science’s role in crafting sophisticated solutions becomes pivotal for our survival. Although not everyone may be inclined toward research, for those drawn to it, it’s a rewarding journey. While not a path to quick wealth, it offers immense satisfaction in solving complex problems and contributing to our understanding of the world. It’s a source of joy, pleasure, and accomplishment, something I’ve personally cherished throughout my journey in the field.

This interview was conducted on behalf of the BNVKI, the Benelux Association for Artificial Intelligence, which brings together AI researchers from Belgium, the Netherlands, and Luxembourg.
