Do you enjoy logic and tinkering with complex problems? When you see a flood of data, do you get creative instead of running away? Then you've come to the right place. As a Data Engineer (m/f/d) you will build data pipelines that prepare and deliver unstructured data from a variety of sources.
Modern payment methods are something quite wonderful. Apple Pay, PayPal, credit card - the list of payment options is huge. Unfortunately, this variety also causes pain and confusion for consumers and retailers. You have probably experienced it yourself.
We, paymenttools, are a start-up of the REWE Group, and we want to clean up the confusion in the world of payments in Europe. And later in the whole solar system. In other words: #wesolvepayn.
As a Big Data Specialist, you lay the foundation for the work of the Data Scientists and Business Analysts with whom you collaborate closely. You are service-oriented and quickly understand which data we are still missing and urgently need. At the same time, you never lose sight of data protection and data security.
Your tasks
- You will strengthen our core technical capabilities in data engineering by building and maintaining data pipelines using Google Cloud services; experience with AWS and Azure is also welcome
- You own the full lifecycle from development through deployment and maintenance, ensuring high data quality in production systems
- You develop data pipelines and the architecture of new processing systems that power analytics products from a variety of data sources
- You handle the flood of structured, semi-structured and unstructured data with confidence, reviewing, cleansing and refining it along the way
- You pull data from a wide range of sources and systems and prepare it for downstream use
- You also get room to experiment: using the latest cloud technologies, you quickly turn new ideas into prototypes and bring them to life
Your experiences
- You have a degree in computer science, business informatics, mathematics, statistics, engineering or a natural science
- You hold yourself to high quality standards and focus on delivering robust, scalable software solutions
- You understand the requirements of different stakeholders and can support them with the right data
- You bring strong reasoning and problem-solving skills
- You have extensive experience with programming languages such as Python and Scala, with the Apache big data stack (e.g. Spark), and with cloud infrastructure and services (e.g. BigQuery)
- You think in a service-oriented way, keeping in mind the many, and often unknown, consumers of the data you process
- You work confidently with streaming and batch data
- You live and breathe technical innovation and stay up to date on new technologies
- You have solid knowledge of SQL and relational databases, as well as advanced knowledge of common data-processing architectures and processes
Nice-to-have: You have a soft spot for machine learning and some initial know-how in this area.
Your benefits
- short decision-making paths and great scope for decision-making
- hybrid work from anywhere with internet access. As long as the view is right!
- flexible working hours that fit your workflow, as long as you can join our meetings every now and then
- responsibility from day one
- creative freedom for your ideas, impulses and tools you like to use
- a wide choice of smartphone and notebook
- a job ticket for the VRS (+ VRR) transit network
- JobRad (company bike leasing)