Agentic AI Developer

A leading Silicon Valley venture capitalist (VC) once remarked that “Evergent is a diamond in the rough.” Today, Evergent manages more than 560 million user accounts across 180+ countries on behalf of our customers. Globally, Evergent works with 5 of the top 10 carriers (AT&T, Etisalat, SingTel, Telkomsel, and AirTel) and 4 of the top 10 media companies (HBO, FOX, SONY, and BBC). The comment does not surprise us: we have built this with an amazing global team of 300+ professionals. Evergent is recognized as the global leader in Customer Lifecycle Management, helping companies launch new revenue streams without disturbing inflexible legacy systems. The need for digital transformation in the subscription economy, and our ability to launch services in weeks, is what sets Evergent apart. We welcome you to come and meet with us.

Job Title: Agentic AI Developer (Conversational AI & LLM Specialist)

Department: AI/ML Engineering

Location: Hyderabad, India

Experience: 4-6 Years

 

Job Summary:

We are seeking a highly motivated and experienced Agentic AI Developer to join our growing AI/ML Engineering team. In this role, you will be instrumental in designing, developing, and deploying sophisticated conversational AI solutions leveraging agentic AI frameworks and Large Language Models (LLMs). You’ll focus on building intelligent agents capable of complex task execution through conversation, integrating them with various data sources, and ensuring their performance and reliability. You’ll work at the intersection of natural language processing, machine learning, and software engineering to create innovative solutions that drive business value.

 

Responsibilities:

Agentic AI Implementation: Design, develop, test, and deploy agentic AI use cases using frameworks like Strands Agents, Agent Squad, LangChain, LangGraph, and Langflow.

Conversational AI Development: Build and maintain conversational AI chatbots/agents utilizing platforms such as Amazon Lex and Rasa.

LLM Integration & Optimization: Integrate Large Language Models (LLMs) into agent workflows, focusing on prompt engineering, model grounding, and fine-tuning for optimal performance.

RAG Implementation: Design and implement Retrieval Augmented Generation (RAG) pipelines to enable agents to access and utilize external knowledge sources effectively (an illustrative sketch of such a pipeline appears after this list).

Vector Database Management: Work with vector databases (e.g., Pinecone, Chroma, Weaviate) to store and retrieve embeddings for efficient RAG and semantic search.

Model Context Protocol (MCP): Implement and manage MCP servers that expose tools, data, and context to agents and LLMs in a standardized way (an illustrative MCP server sketch appears after this list).

Data Engineering & Integration: Develop data pipelines to ingest, transform, and load data into vector databases and other relevant systems using big data platforms like Amazon Redshift, BigQuery, or ClickHouse.

Prompt Engineering: Craft effective prompts for LLMs to guide agent behavior and ensure accurate and contextually appropriate responses.

Model Fine-Tuning: Fine-tune pre-trained LLMs on specific datasets to improve their performance in targeted tasks.

Monitoring & Optimization: Monitor the performance of deployed agents, identify areas for improvement, and implement optimizations to enhance accuracy, efficiency, and user experience.

Collaboration: Collaborate closely with product managers, data scientists, and other engineers to define requirements, design solutions, and ensure successful deployment.

Best Practices: Adhere to coding best practices, including version control (Git), testing, and documentation.
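
The RAG bullet above references an illustrative sketch. Here is a minimal one showing the shape of such a pipeline: documents are embedded into a vector database (Chroma here), retrieved by semantic similarity, and used to ground an LLM prompt. The collection contents and the call_llm() helper are hypothetical placeholders, not Evergent's actual stack.

```python
# Illustrative RAG sketch (not Evergent's production code).
import chromadb

client = chromadb.Client()  # in-memory client; a persistent client works the same way
collection = client.create_collection(name="product_docs")

# Ingest: Chroma embeds these documents with its default embedding function.
collection.add(
    ids=["doc-1", "doc-2"],
    documents=[
        "The billing module supports prepaid and postpaid subscription plans.",
        "Dunning workflows retry failed payments on a configurable schedule.",
    ],
)

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM client (e.g., Amazon Bedrock or an open-source model).
    return f"[LLM response grounded in a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    # Retrieve the most semantically similar documents for the question.
    results = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(results["documents"][0])

    # Augment and generate: ground the model's answer in the retrieved context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How are failed payments handled?"))
```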
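
The MCP bullet above likewise references a sketch: a minimal MCP server, assuming the FastMCP helper from the official MCP Python SDK (the "mcp" package). The tool name and the hard-coded account data are hypothetical examples only.

```python
# Illustrative MCP server sketch (not Evergent's production code).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("subscription-tools")

_DEMO_ACCOUNTS = {"ACC-1001": "active", "ACC-1002": "suspended"}

@mcp.tool()
def get_account_status(account_id: str) -> str:
    """Return the subscription status for a given account id."""
    return _DEMO_ACCOUNTS.get(account_id, "unknown")

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable agent or LLM host can discover and call the tool.
    mcp.run()
```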

 

Qualifications & Skills:

Education: Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

Experience: 4-6 years of experience in software development with a focus on AI/ML applications.

Programming Proficiency: Strong proficiency in Python and its associated libraries (e.g., NumPy, Pandas).

Machine Learning Expertise: Solid understanding of machine learning concepts, algorithms, and techniques.

LLM Knowledge: Deep understanding of Large Language Models (LLMs) – architectures, capabilities, limitations, and best practices for utilization.

Agentic AI Frameworks: Hands-on experience with agentic AI frameworks such as Strands Agents, Agent Squad, LangChain, LangGraph, and Langflow.

Conversational AI Platforms: Experience with conversational AI platforms like Amazon Lex and Rasa.

RAG & Vector Databases: Proven ability to design and implement RAG pipelines and work with vector databases (Pinecone, Chroma, Weaviate).

Prompt Engineering Skills: Demonstrated skill in crafting effective prompts for LLMs.

Model Fine-Tuning Experience: Experience fine-tuning pre-trained LLMs.

Data Engineering Skills: Experience working with big data platforms such as Amazon Redshift, BigQuery, or ClickHouse.

Cloud Computing: Familiarity with cloud computing environments (AWS preferred).

MCP Servers: Understanding of the Model Context Protocol and experience implementing MCP servers that give agents access to tools and context.

Excellent Communication: Strong written and verbal communication skills.

Apply