Calix is where passionate innovators come together with a shared mission: to reimagine broadband experiences and empower communities like never before. As a true pioneer in broadband technology, we ignite transformation by equipping service providers of all sizes with an unrivaled platform, state-of-the-art cloud technologies, and AI-driven solutions that redefine what’s possible. Every tool and breakthrough we offer is designed to simplify operations and unlock extraordinary subscriber experiences through innovation.
Calix is seeking a highly skilled MLOps Engineer with hands-on GCP experience to join our cutting-edge AI/ML team. In this role, you will be responsible for building, scaling, and maintaining the infrastructure that powers our machine learning and generative AI applications. You will work closely with data scientists, ML engineers, and software developers to ensure our ML/AI systems are robust, efficient, and production-ready.
This is a remote-based position that can be located anywhere in the United States or Canada.
Key Responsibilities:
Design, implement, and maintain scalable infrastructure for ML and GenAI applications.
Deploy, operate, and troubleshoot production ML pipelines and generative AI services.
Build and optimize CI/CD pipelines for ML model deployment and serving.
Scale compute resources across CPU/GPU/TPU/NPU architectures to meet performance requirements.
Implement container orchestration with Kubernetes for ML workloads.
Architect and optimize cloud resources on GCP for ML training and inference.
Set up and maintain runtime frameworks and job management systems (Airflow, Kubeflow, MLflow).
Establish monitoring, logging, and alerting for ML system observability.
Collaborate with data scientists and ML engineers to translate models into production systems.
Optimize system performance and resource utilization for cost efficiency.
Develop and enforce MLOps best practices across the organization.
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
8+ years of overall software engineering experience.
3+ years of focused experience in MLOps or similar ML infrastructure roles.
Strong experience with Docker container services and Kubernetes orchestration.
Demonstrated expertise in cloud infrastructure management, preferably on GCP (AWS or Azure experience also valued).
Proficiency with workflow management and ML runtime frameworks such as Airflow, Kubeflow, and MLflow.
Strong CI/CD expertise with experience implementing automated testing and deployment pipelines.
Experience scaling distributed compute architectures across various accelerators (CPU/GPU/TPU/NPU).
Solid understanding of system performance optimization techniques.
Experience implementing comprehensive observability solutions for complex systems.
Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK stack).
Proficient in at least two of the following: shell scripting, Python, Go, or C/C++.
Familiarity with ML frameworks such as PyTorch and ML platforms like SageMaker or Vertex AI.
Excellent problem-solving skills and ability to work independently.
Strong communication skills and ability to work effectively in cross-functional teams.
The base pay range for this position varies by geographic location. More information about the pay range specific to the candidate's location and other factors will be shared during the recruitment process. Individual pay is determined by location of residence and multiple factors, including job-related knowledge, skills, and experience.
San Francisco Bay Area:
0 - 0 USD Annual
All Other US Locations:
As part of the total compensation package, this role may be eligible for a bonus.

