We are looking for an experienced Data System Engineer. Our team not only keeps our Big Data platform (Hadoop ecosystem) up and running, but also develops our own batch and streaming applications and helps other teams and stakeholders improve and tune theirs. We work as a cross-functional team of system engineers and developers, following Scrum with two-week sprints.
Your responsibilities:
- Build, maintain and optimize the PAYBACK BigData platform (MapR / HPE Ezmeral Data Fabric) and the Confluent Kafka Cluster setup.
- Monitor the platform, respond to alerts, troubleshoot problems, and optimize performance.
- Take part in the migration of data and processes to Google Cloud.
- Research and evaluate new data tools and technologies as part of the team's activities.
- Actively improve our Ansible automation.
Your Profile:
- You have a Bachelor's or Master's degree in IT or equivalent work experience.
- Experience with and a good understanding of Confluent Kafka.
- Experience with and a good understanding of the Hadoop ecosystem (Spark, Hive, Hue, Livy, YARN, ZooKeeper, etc.).
- Knowledge of MapR / HPE Ezmeral Data Fabric is an advantage.
- Outstanding Linux administration and maintenance skills.
- Good knowledge of Ansible automation.
- You are curious and eager to expand your knowledge of new technologies.
- Experience with analytical tools such as JupyterHub.
- Experience with containerization (Docker, OpenShift) is an advantage.
- Experience with further database technologies (PostgreSQL, Redis, Cassandra) is an advantage.
- Experience with cloud hyperscalers (GCP, AWS, etc.) is an advantage.
- You have a strong service and customer focus.
- You work self-reliantly and autonomously within the team.
- Good communication skills in English; please submit your CV in English.