
Backend Engineer, Optimized Checkout & Link Data Engineering


Location: CA and US
Company: Stripe

Job Description

Who we are

About Stripe

Stripe is a financial infrastructure platform for businesses. Millions of companies—from the world’s largest enterprises to the most ambitious startups—use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone’s reach while doing the most important work of your career.

About the team

The Optimized Checkout & Link team at Stripe builds best-in-class checkout experiences across web and mobile that delight consumers and streamline checkout flows for merchants. Based across North America, we're a diverse team deeply passionate about redefining the payment experience: creating outstanding value for merchants by increasing revenue, lowering costs, and growing their businesses. We work on Checkout, Payment Links, Elements, Payment Methods, and Link, each playing a crucial part in augmenting the economic landscape of the internet. Our days are filled with exciting challenges and collaborative problem-solving as we strive to simplify payment options, create unique business solutions, and enhance checkout ease. Join us in crafting the future of digital commerce.

What you’ll do

We’re looking for people with a strong background in data engineering and analytics to help us scale while maintaining correct and complete data.

Responsibilities

  • Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
  • Advocate for data quality and excellence across our platform
  • Create and contribute to frameworks that improve the efficacy of logging data, and work with data infrastructure teams to triage and resolve issues
  • Gather requirements, understand the big picture, and create detailed proposals in technical specification documents
  • Productize data ingestion from various sources and data delivery to various destinations, and create well-orchestrated data pipelines
  • Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
  • Conduct SQL data investigations, data quality analyses, and optimizations
  • Contribute to peer code reviews and help the team produce high-quality code
  • Mentor team members by giving and receiving actionable feedback

Who you are

We’re looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements

  • Bachelor's degree in Computer Science or Engineering; a Master's degree is preferred
  • Strong engineering background and an interest in data
  • 5+ years of experience writing and debugging data pipelines using a distributed data framework (e.g., Hadoop, Spark, Pig)
  • Strong data modeling and database design skills, both relational and non-relational
  • Very strong SQL proficiency, preferably with SQL query optimization experience
  • Strong coding skills in Scala or Java, preferably for building performant data pipelines
  • Strong understanding of and practical experience with systems such as Hadoop, Spark, Presto, Iceberg, and Airflow
  • Well-versed in production software engineering practices: version control, peer code reviews, automated testing, and CI/CD
  • Excellent communication skills
  • Experience with AWS is preferred
