Agentic AI Security Engineer - New York
Why join this team?
Imagine working in a dynamic environment where your expertise directly influences the safety and security of cutting-edge AI technologies. As an Agentic AI Security Engineer, you will have the opportunity to:
- Innovate and Lead: Develop and implement advanced security controls for agentic AI systems, large language models (LLMs), and AI pipelines.
- Collaborate and Create: Work closely with research, product, and engineering teams to embed robust safeguards into every stage of AI development.
- Stay Ahead: Engage with the latest research in AI security, safety, and interpretability, keeping your skills and knowledge at the forefront of the field.
- Competitive Rewards: Enjoy a competitive salary, equity, and a comprehensive benefits package.
- Flexible Work Environment: Benefit from a hybrid work model, with an office located in the vibrant heart of New York City.
What you’ll be doing:
- Conduct adversarial testing, red-teaming, and security evaluations of AI models and multi-agent environments.
- Identify risks such as prompt injection, data poisoning, model extraction, and unintended behaviors.
- Build security automation and monitoring into AI deployments.
- Partner with engineering teams to embed robust safeguards into agentic workflows.
What we’re looking for:
- 3–5 years of experience in security engineering, ideally including exposure to machine learning or AI systems.
- Understanding of LLMs, agent frameworks, and AI-driven applications.
- Hands-on experience with adversarial ML, red-teaming, or secure system design.
- Strong programming skills in Python (preferred) or other relevant languages.
- Familiarity with cloud-native AI infrastructure (AWS, GCP, Azure) and containerized environments.
- Curiosity, creativity, and the ability to think like both an attacker and a defender.