
Senior Software Engineer, DevOps

Gather AI

Location: United States

Job Description

About Us

Gather AI is a supply chain robotics company founded by PhDs from Carnegie Mellon’s Robotics Institute who created the world’s first provably safe autonomous helicopter. We have developed an Inventory-as-a-Service platform where fully autonomous drones collect warehouse inventory data at the press of a button.

This is an essential problem to solve: the warehouses we serve have typically misplaced over 10% of their inventory, worth more than $10 million (seriously!). Their current manual techniques for taking inventory are falling down due to the e-commerce boom brought on by COVID, made worse by the labor shortage and 70% annual staff turnover. Our drones take inventory 15x faster than humans with over 95% accuracy. We deliver this data through our web dashboard, which acts as a DVR for the warehouse and is where our customers run their inventory operation. We are the leader in this new market with proven technology: our drones are live in a dozen warehouses and have scanned over 150k pallet locations.

We are a pure-software robotics company and our key innovation is the world’s only autonomy and machine learning engine that can solve this problem with commodity hardware in GPS-denied environments. That means we avoid all of the hardware development pitfalls of traditional robotics companies and we can scale 10x faster. The robotics industry is starting to enter its “Google era,” and we are leading the charge.

About You

You are a detail-oriented, self-directed person who enjoys creating infrastructure as code. You are excited about the prospect of working across a broad array of DevOps concerns, including automating the deployment and scaling of both traditional web architectures and the ML pipelines for our AI systems. You will also help build and manage CI/CD pipelines, monitoring, and observability infrastructure, and keep our infrastructure secure.

Maybe you’ve worked on big projects at a big company, or on many small consulting projects where standard infrastructure was needed, or maybe even at a startup where you were turning ideas into working software platforms. You are ready for a fresh challenge: to be the person who defines what DevOps and deployment look like at a fast-growing, AI- and robotics-centric company. You love test-driving new technologies, and you like the challenge of incorporating them into your organization in a secure, sustainable way.

Joining a startup, you are excited to collaborate across our team and with our customers to understand their needs and to have a profound, positive impact for both them and us.

What You’ll Do

  • Launch or own specific technology or process initiatives within our organization.
  • Collaborate across teams, and with our customers when the opportunity arises.
  • Identify and implement containerization, networking, and security best practices for our web and ML back-end applications.
  • Help us scale up our ML pipeline packaging by improving how we distribute the inference workload across multiple nodes.
  • Ensure the reliability and observability of our pipelines by introducing monitoring, metrics, and logging tools.
  • Increase our development velocity by leveraging containerization, infrastructure-as-code, and modern CI/CD practices.
  • Create tools, automation scripts, and processes to manage our ML models and datasets.
  • Collaborate with development and operations teams to integrate security practices into the software development lifecycle (SDLC), including code reviews, vulnerability assessments, and automated security testing.
  • Implement and maintain security controls and best practices for infrastructure as code (IaC), containerized environments, and continuous integration/continuous deployment (CI/CD) pipelines to ensure the security of deployed applications and infrastructure.

What You’ll Need

  • BS in Computer Science/Engineering or equivalent technical experience.
  • 5+ years of internet technology work experience as a programmer or infrastructure-as-code developer.
  • 2+ years of experience working with production infrastructure-as-code technologies (e.g., AWS CDK, Terraform, Pulumi).
  • Comfortable with cloud technologies: compute, storage, monitoring, and networking.
  • Experience implementing secure design principles and working within industry compliance standards (PCI-DSS, ISO, SOC 2, GDPR, etc.).
  • Strong familiarity with the GitHub ecosystem and modern CI/CD practices.
  • Customer obsession! We are a customer-obsessed company. If you are not already customer-obsessed, expect to become so.

Nice to Have

  • Deep knowledge and experience in at least one of the major cloud compute platforms (AWS, Azure, and/or Google Cloud); note that we are currently multi-provider (AWS and Azure).
  • Experience in distributed ML inference with platforms such as AWS SageMaker, GCP Vertex AI, Seldon, or Kubeflow.
  • Interest and experience in building complete code-to-production pipelines.
  • Experience configuring and managing Kubernetes clusters.
  • Familiarity with ML architectures and the ML lifecycle, especially computer vision with deep learning.

Compensation and Benefits

  • Compensation package will include equity
  • Comprehensive medical, dental, vision, and life insurance
  • Unlimited PTO and a very flexible schedule

If this sounds like a good fit, we’d love to meet you. Robotics is the future, and we’re leading the charge with our software-only business model. Come help us change the world!


About the job

May 1, 2024

Full-time

United States
