The vacancy clearly outlines responsibilities and the tech stack but omits compensation details and company information.
Job description
Seeking a mid-level Data Engineer to develop and optimize ETL flows and data quality systems, with experience in Hadoop, ClickHouse, and Spark.
Responsibilities
- Develop and optimize ETL flows between Hadoop and ClickHouse.
- Integrate data quality control systems into applications.
- Automate routine operations in application attribute management.
- Conduct experiments to improve ClickHouse computation performance.
Requirements
- 2+ years of experience with data platforms (Hadoop, ClickHouse, Greenplum).
- Deep understanding of Spark, Hive, and MapReduce.
- Advanced SQL skills.
- Experience with Kafka and Flink is a plus.
- Experience in configuring and optimizing ClickHouse.
- Familiarity with CI/CD tools (Jenkins, Bitbucket, Nexus) and DevOps.