
Ramprakash Ramamoorthy, Head of AI Research at ManageEngine – Interview Series


Ramprakash Ramamoorthy is the Head of AI Research at ManageEngine, the enterprise IT management division of Zoho Corp. ManageEngine empowers enterprises to take control of their IT, from security, networks, and servers to applications, service desk, Active Directory, desktops, and mobile devices.

How did you initially get interested in computer science and machine learning?

Growing up, I had a natural curiosity about computing, but owning a computer was beyond my family's means. However, because my grandfather was a professor of chemistry at a local college, I sometimes got the chance to use the computers there after hours.

My interest deepened in college, where I finally got my own PC. There, I developed a few web applications for my university. These applications are still in use today, a full 12 years later, which really underlines the impact and longevity of my early work. That experience was a comprehensive lesson in software engineering and the real-world challenges of scaling and deploying applications.

My professional journey in technology began with an internship at Zoho Corp. Initially, my heart was set on mobile app development, but my boss nudged me to finish a machine learning project before moving on to app development. This turned out to be a turning point; I never did get a chance to do mobile app development, so it's a little bittersweet.

At Zoho Corp, we have a culture of learning by doing. We believe that if you spend enough time with a problem, you become the expert. I'm really grateful for this culture and for the guidance from my boss; it's what kick-started my journey into the world of machine learning.

As the director of AI Research at Zoho & ManageEngine, what does your average workday look like?

My workday is dynamic and revolves around both team collaboration and strategic planning. A significant slice of my day is spent working closely with a talented team of engineers and mathematicians. Together, we build and enhance our AI stack, which forms the backbone of our services.

We operate as the central AI team, providing AI solutions as a service to a wide range of products within both ManageEngine and Zoho. This role requires a deep understanding of the various product lines and their unique requirements. My interactions aren't limited to my team; I also work extensively with internal teams across the organization. This collaboration is crucial for aligning our AI strategy with the specific needs of our customers, which are continuously evolving. It is a great opportunity to rub shoulders with the smartest minds across the company.

Given the rapid pace of advancements in AI, I dedicate a considerable amount of time to staying abreast of the latest developments and trends in the field. This continuous learning is essential for maintaining our edge and ensuring our strategies remain relevant and effective.

Moreover, my role extends beyond the confines of the office. I have a passion for speaking and travel, which dovetails nicely with my responsibilities. I often engage with analysts and participate in various forums to evangelize our AI strategy. These interactions not only help spread our vision and achievements but also provide valuable insights that feed back into our strategic planning and execution.

You've witnessed AI's evolution since positioning ManageEngine as a strategic AI pioneer back in 2013. What were some of the machine learning algorithms that were used in those early days?

Our initial focus was on replacing traditional statistical techniques with AI models. For instance, in anomaly detection, we transitioned from a bell-curve methodology that flagged extremes to AI models that were adept at learning from past data, recognizing patterns and seasonality.
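To illustrate the difference, here is a minimal Python sketch on synthetic data (an assumption for illustration, not ManageEngine's actual implementation): a global bell-curve rule based on z-scores misses a spike that is normal in absolute terms but abnormal for its hour of day, while a simple seasonality-aware baseline catches it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly metric with a daily (24-point) seasonal cycle plus noise,
# and one injected spike that stays inside the normal global range.
hours = np.arange(24 * 14)
seasonal = 50 + 20 * np.sin(2 * np.pi * hours / 24)
series = seasonal + rng.normal(0, 2, size=hours.size)
series[100] += 15  # anomalous for that hour of day, but not a global extreme

# 1) Bell-curve approach: flag points more than 3 standard deviations from the
#    global mean. It misses anomalies that are only unusual relative to season.
z = (series - series.mean()) / series.std()
bell_curve_flags = np.where(np.abs(z) > 3)[0]

# 2) Seasonality-aware approach: compare each point to the typical value for
#    the same hour of day, then flag unusually large residuals.
hour_of_day = hours % 24
profile = np.array([series[hour_of_day == h].mean() for h in range(24)])
residual = series - profile[hour_of_day]
seasonal_flags = np.where(np.abs(residual) > 4 * residual.std())[0]

print("global z-score flags:", bell_curve_flags)       # empty
print("seasonality-aware flags:", seasonal_flags)      # catches index 100
```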

We incorporated a wide range of algorithms, from support vector machines to decision-tree-based methods, as the foundation of our AI platform. These algorithms were pivotal in identifying niche use cases where AI could significantly leverage past data for pattern finding, forecasting, and root cause analysis. Remarkably, many of these algorithms are still effectively in production today, underlining their relevance and efficiency.

Could you discuss how LLMs and generative AI have changed the workflow at ManageEngine?

Large language models (LLMs) and generative AI have certainly caused a stir in the consumer world, but their integration into the enterprise sphere, including at ManageEngine, has been more gradual. One reason for this is the high entry barrier, particularly in terms of cost, and the significant data and computation requirements these models demand.

At ManageEngine, we're strategically investing in domain-specific LLMs to harness their potential in a way that is tailored to our needs. This involves developing models that are not generic in their application but are fine-tuned to handle specific areas within our enterprise operations. For instance, we're working on an LLM dedicated to security, which can flag security events more efficiently, and another that focuses on infrastructure monitoring. These specialized models are currently in development in our labs, reflecting our commitment to leveraging the emergent behaviors of LLMs and generative AI in a way that adds tangible value to our enterprise IT solutions.

ManageEngine offers a plethora of AI tools for various use cases. What is one tool that you are particularly proud of?

I'm incredibly proud of all our AI tools at ManageEngine, but our user and entity behavior analytics (UEBA) stands out for me. Launched in our early days, it's still a robust and vital part of our offerings. We understood the market expectations and added an explanation to every anomaly as a standard practice. Our UEBA capability is continuously evolving, and we carry the learnings forward to make it better.
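As a rough, hypothetical illustration of attaching an explanation to an anomaly (invented feature names and thresholds, not ManageEngine's UEBA implementation), a detector can report which behavioral features drove the flag:

```python
import numpy as np

# Hypothetical per-user daily activity features (rows: days, cols: features).
feature_names = ["logins", "files_accessed", "mb_uploaded"]
history = np.array([
    [12, 40, 5], [10, 35, 4], [11, 42, 6], [13, 38, 5],
    [12, 41, 5], [9, 37, 4], [11, 39, 6],
], dtype=float)
today = np.array([11, 44, 250], dtype=float)  # unusual upload volume

mean, std = history.mean(axis=0), history.std(axis=0)
z = (today - mean) / std

# Flag the day if any feature deviates strongly, and attach a human-readable
# explanation naming the features that drove the anomaly score.
threshold = 3.0
drivers = [
    f"{name} = {value:.0f} (expected ~{m:.0f}, {score:.1f} sigma)"
    for name, value, m, score in zip(feature_names, today, mean, z)
    if abs(score) > threshold
]
if drivers:
    print("Anomaly detected. Contributing factors:", "; ".join(drivers))
```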

ManageEngine currently offers AppCreator, a low-code custom application development platform that lets IT teams create customized solutions rapidly and launch them on-premises. What are your views on the future of no-code or low-code applications? Will these eventually take over?

The future of low-code and no-code applications, like our AppCreator, is extremely promising, especially in the context of evolving business needs. These platforms are becoming pivotal for organizations to extend and maximize the capabilities of their existing software assets. As businesses grow and their requirements change, low-code and no-code solutions offer a flexible and efficient way to adapt and innovate.

Furthermore, these platforms are playing a vital role in IT enablement for businesses. By offering evolving technology, like AI as a service, they significantly lower the entry barrier for organizations to sample the power of AI.

Could you share your personal views on AI risks, including AI bias, and how ManageEngine is managing these risks?

At ManageEngine, we recognize the serious threat posed by AI risks, including AI bias, which can widen the technology access gap and affect critical business functions like HR and finance. For instance, stories of AI exhibiting biased behavior in recruitment are cautionary tales we take seriously.

To mitigate these risks, we implement strict policies and workflows to ensure our AI models minimize bias throughout their lifecycle. It's crucial to monitor these models continuously, as they can start unbiased but develop biases over time due to changes in data.
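One simple way to picture such monitoring, offered purely as an illustrative sketch with simulated decision logs rather than ManageEngine's actual workflow, is to track a fairness metric such as the demographic parity gap over successive batches of model decisions and alert when it drifts past a threshold:

```python
import numpy as np

def demographic_parity_gap(predictions, group):
    """Difference in positive-prediction rates between two groups (0 and 1)."""
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

rng = np.random.default_rng(1)

# Hypothetical monthly batches of model decisions (1 = positive outcome)
# alongside a sensitive attribute; in production these would come from logs.
for month in range(1, 7):
    group = rng.integers(0, 2, size=1000)
    # Simulate drift: the positive rate for group 1 slowly decays over time.
    p = np.where(group == 0, 0.30, 0.30 - 0.02 * month)
    predictions = rng.random(1000) < p
    gap = demographic_parity_gap(predictions, group)
    status = "ALERT" if gap > 0.05 else "ok"
    print(f"month {month}: parity gap = {gap:.3f} [{status}]")
```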

We're also investing in advanced technologies like differential privacy and homomorphic encryption to fortify our commitment to safe and unbiased AI. These efforts are vital in ensuring that our AI tools are not only powerful but also used responsibly and ethically, maintaining their integrity for all users and applications.

What's your vision for the future of AI and robotics?

The future of AI and robotics is shaping up to be both exciting and transformative. AI has certainly experienced its share of boom-and-bust cycles in the past. However, with advancements in data collection and processing capabilities, as well as emerging revenue models around data, AI is now firmly established and here to stay.

AI has evolved into a mainstream technology, significantly impacting how we interact with software at both the enterprise and personal levels. Its generative capabilities have already become an integral part of our daily lives, and I foresee AI becoming even more accessible and affordable for enterprises, thanks to new techniques and advancements.

A vital aspect of this future is the responsibility of AI developers. It's crucial for developers to ensure that their AI models are robust and free from bias. Moreover, I hope to see legal frameworks evolve at a pace that matches the rapid development of AI, so that any legal issues that arise can be managed and mitigated effectively.

My vision for AI is a future where these technologies are seamlessly integrated into our daily lives, enhancing our capabilities and experiences while being ethically and responsibly managed.
