Job Title: Kafka and GenAI Infrastructure Engineer
Location: Fort Mill, SC / Jersey City, NJ (Hybrid)
Duration: Full-time
Job Summary: Incedo is seeking a highly skilled Kafka and GenAI Infrastructure Engineer to join our cutting-edge platform and data engineering team. This role involves designing, building, and maintaining scalable, secure Apache Kafka infrastructure and integrating it with Generative AI (GenAI) workloads in a cloud-native environment.
Key Responsibilities:
Kafka Engineering & Operations:
Design, deploy, and manage scalable Apache Kafka / Confluent Platform clusters.
Manage Kafka topics, brokers, Schema Registry, connectors (Kafka Connect), and stream processing (Kafka Streams/KSQL).
Implement monitoring, alerting, and log aggregation for the Kafka ecosystem using tools such as Prometheus, Grafana, and Splunk.
Ensure high availability, fault tolerance, and disaster recovery of Kafka clusters.
GenAI Infrastructure:
Work with large language models (LLMs) and vector databases to support GenAI use cases.
Integrate Kafka with AI/ML pipelines for real-time inference and data streaming to/from GenAI models.
Deploy and manage GenAI services (e.g., OpenAI, Hugging Face, Vertex AI, Amazon Bedrock) within secure, compliant infrastructure.
Cloud & DevOps Enablement:
Provision infrastructure using Terraform / CloudFormation / Pulumi.
Manage deployments in AWS, Azure, or GCP using Kubernetes / ECS / Lambda.
Build CI/CD pipelines (GitLab CI, GitHub Actions, or Jenkins) for Kafka and AI/ML components.
Security & Compliance:
Enforce encryption, role-based access control (RBAC), and data protection policies across Kafka and GenAI workloads.
Collaborate with InfoSec and Governance teams to ensure compliance with industry regulations.
Required Skills:
5+ years of experience with Apache Kafka / Confluent Platform
3+ years of experience with cloud infrastructure (AWS, Azure, or GCP)
Familiarity with GenAI / LLM integration (OpenAI, LangChain, vector stores)
Strong understanding of stream processing and event-driven architecture
Experience with Infrastructure as Code (IaC) and CI/CD pipelines
Experience with containerization tools: Docker, Kubernetes
Proficiency in scripting: Python, Bash, or Go
Preferred Qualifications:
Experience integrating Kafka with AI/ML platforms or MLOps tools
Familiarity with Databricks, Apache Flink, or Spark Structured Streaming
Exposure to data governance and lineage tools (e.g., Apache Atlas, Collibra)
Financial industry experience is a plus
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field