Senior Data Engineer

Company: eSimplicity
Location: United States

Job Description

eSimplicity is a modern digital services company that delivers innovative federal and commercial IT solutions designed to improve the health and lives of millions of Americans while defending our national interests. Our solutions and services improve healthcare for 100+ million Americans, protect our borders, and defend our country by supporting and innovating with the Air Force, Space Force, and Navy. Our team of 200+ engineers, designers, and strategists relentlessly challenges the status quo, builds consensus, and collaborates to deliver new solutions with an unwavering focus on the user experience from start to finish.

Responsibilities:

  • Develops new data pipelines, maintains existing pipelines, updates Extract, Transform, Load (ETL) processes, and builds new ETL features
  • Supports software developers, database architects, data analysts, and data scientists on data initiatives and ensures an optimal data delivery architecture is consistent throughout ongoing projects.
  • Assembles large, complex datasets that meet functional and non-functional business requirements
  • Builds required infrastructure for optimal extraction, transformation and loading of data from various data sources using AWS and SQL technologies
  • Builds analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition 
  • Works with data, design, product, and government stakeholders and assists them with data-related technical issues
  • Writes unit and integration tests for all data processing code.
  • Works with DevOps engineers on CI, CD, and IaC.
  • Reads specs and translates them into code and design documents.

Required Qualifications:

  • Minimum of 8 years of related experience.
  • 4 years of hands-on software development experience.
  • 4 years of data pipeline experience using Python, Java, and cloud technologies.
  • A Bachelor’s degree in Computer Science, Information Systems, Engineering, Business, or other related scientific or technical discipline. With ten years of general information technology experience and at least eight years of specialized experience, a degree is NOT required.
  • Experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
  • Experienced in designing data services, including APIs, metadata, and data catalogs.
  • Experienced in data governance processes to ingest (batch, stream), curate, and share data with upstream and downstream data users.
  • Experience building and optimizing datasets, ‘big data’ pipelines, and architectures.
  • Experience performing root cause analysis on external and internal processes and data to identify opportunities for improvement and answer questions.
  • Analytic skills for working with unstructured datasets.
  • Experience building processes that support data transformation, workload management, data structures, dependency management, and metadata.
  • Demonstrated understanding of software and tools, including big data tools such as Kafka, Spark, and Hadoop; relational SQL and NoSQL databases, including Postgres and Cassandra; workflow management and pipeline tools such as Airflow, Luigi, and Azkaban; AWS cloud services, including Redshift, RDS, EMR, and EC2; stream-processing systems such as Spark Streaming and Storm; and object-oriented and functional scripting languages, including Scala, C++, Java, and Python.
  • Flexible and willing to accept a change in priorities as necessary.
  • Ability to work in a fast-paced, team-oriented environment.
  • Experience with Agile methodology, using test-driven development.
  • Experience with Atlassian Jira/Confluence.
  • Excellent command of written and spoken English.
  • Ability to obtain and maintain a Public Trust clearance; must reside in the United States.

Desired Qualifications:

  • Federal Government contracting work experience.
  • Google Professional Data Engineer certification, IBM Certified Data Engineer – Big Data certification, or Cloudera CCP Data Engineer certification.
  • Centers for Medicare and Medicaid Services (CMS) or Health Care Industry experience
  • Experience with healthcare quality data including Medicaid and CHIP provider data, beneficiary data, claims data, and quality measure data.
  • Experience designing data architecture for shared services, scalability, and performance.

eSimplicity supports a remote work environment operating within the Eastern time zone so we can work with and respond to our government clients. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed by your manager.
Occasional travel for training and project meetings, estimated at 5-15% per year.
Benefits: We offer a highly competitive salary, full healthcare benefits, and a flexible leave policy.
Equal Employment Opportunity: eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.

Advice from our career coach

As an applicant for the Senior Data Engineer position at eSimplicity, it is important to showcase your experience and expertise in data pipeline development and maintenance using Python, Java, and cloud technologies. Here are some key tips to help you stand out:

  • Highlight your hands-on software development and data pipeline experience, emphasizing your ability to assemble large, complex datasets and build infrastructure for data extraction, transformation, and loading.
  • Showcase your experience in working with stakeholders across various teams, such as data, design, product, and government stakeholders, to address data-related technical issues and deliver actionable insights.
  • Demonstrate your expertise in writing unit and integration tests for data processing code and working with DevOps engineers on CI, CD, and IaC.
  • Emphasize your familiarity with big data tools like Kafka, Spark, and Hadoop, as well as workflow management and pipeline tools such as Airflow, Luigi, and Azkaban.
  • Highlight your experience with agile methodology, test-driven development, and tools like Atlassian Jira/Confluence to showcase your ability to work in a fast-paced, team-oriented environment.
