Job Description
5+ years of experience as a Big Data stack developer
Solid hands-on working experience with Big Data technologies
Strong hands-on experience with programming languages such as Java, Python, and Scala, and with Spark
Good command of and working experience with Hadoop/MapReduce, HDFS, Hive, HBase, and NoSQL databases
Hands-on working experience analyzing source system data and data flows, and working with structured and unstructured data
Hands-on working experience with data processing at scale using event-driven systems and message queues/stream processing (Kafka/Flink/Spark Streaming)
Hands-on working experience with any of the data engineering/analytics platforms (Hortonworks/Cloudera/MapR); Hortonworks preferred
Hortonworks or Cloudera certification would be a big plus
Experience building data pipelines for structured/unstructured, real-time/batch, and event-driven/synchronous/asynchronous data using MQ, Kafka, and stream processing
Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.)
Data warehouse experience with Apache Kylin, Apache NiFi, Apache Airflow, and Kylo
Strong technical, analytical, and problem-solving skills
Strong preference for candidates with Amazon Web Services (AWS) experience
Strengthen the data engineering team with Big Data solutions running on AWS/Azure/GCP
Experience in data warehouse design and best practices
Establish DevOps processes for marshalling big data work products from development to production
Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
Exposure to various ETL and Business Intelligence tools
Must be very strong in writing SQL queries