In this position, you will join our Data Team, tasked with the development and operation of our company-wide data warehousing solution. We work on integrations with multiple services (REST APIs, Kafka topics), optimise data flows, and deliver data artifacts to downstream systems. We develop some of our tools in-house and are always open to testing new ones.
As a Junior Data Engineer you will take responsibility for various elements of our product's lifecycle - from developing new features and optimising existing data pipelines to debugging and improving production operations.
Requirements:
- Eagerness to learn new things - you will have the opportunity to learn from senior engineers AND experiment with new solutions and tools
- SQL - we use PostgreSQL and Redshift. Basic familiarity with optimisation techniques (indices, partitions), stored procedures and views is expected
- Python - OOP, REST API calls. We use Airflow with custom operators and custom libraries for integrations, hosted in Docker containers. A basic understanding of how to structure Python code into modules, develop unit tests and make API calls with requests is expected
- Git - we use Git for version control and are currently reworking our Git flows - we expect you to be able to work with remote repositories
- English - fluent (written and spoken).
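For illustration only (this sketch is not part of the role description, and all names in it are hypothetical): the Python expectations above - OOP, structuring code into modules, unit tests, and API calls - might look roughly like this minimal example, where the HTTP session is injected so the test runs without any network access:

```python
from unittest import mock


class UserClient:
    """Tiny client for a hypothetical REST endpoint.

    In production the ``session`` would be a ``requests.Session``;
    injecting it keeps the class easy to unit-test.
    """

    def __init__(self, base_url, session):
        self.base_url = base_url.rstrip("/")
        self.session = session

    def get_user(self, user_id):
        resp = self.session.get(f"{self.base_url}/users/{user_id}", timeout=10)
        resp.raise_for_status()  # surface HTTP errors early
        return resp.json()


def test_get_user():
    # A Mock stands in for requests.Session, so no network is needed.
    fake = mock.Mock()
    fake.get.return_value.json.return_value = {"id": 1, "name": "Ada"}
    fake.get.return_value.raise_for_status.return_value = None

    client = UserClient("https://api.example.com", session=fake)
    assert client.get_user(1) == {"id": 1, "name": "Ada"}
    fake.get.assert_called_once_with("https://api.example.com/users/1", timeout=10)


test_get_user()
```

Dependency injection here is a deliberate choice: by passing the session in rather than constructing it internally, the same class works against a real `requests.Session` in production and a mock in tests.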
Nice to have:
- AWS cloud services familiarity
- PostgreSQL know-how
- CI/CD experience.
What you get in return:
- You will join a company that cares about work-life balance
- Annual Bonus based on the performance review cycle
- Family Medical Insurance, Pension fund, MyBenefit system and Multisport card for CoE
- Generous Annual Leave Policy (26 days of paid leave for B2B and CoE)
- Hybrid working model (3 days from our modern office and 2 days fully remotely)
- Comprehensive workation policy with 30 additional remote days available
- Possibility of taking two additional days of paid leave per year to dedicate to volunteering efforts.