Location: Pune / Hyderabad / Gurgaon / Bangalore / Hybrid
Job Description:
Must-Have:
- Working knowledge of and hands-on experience with Big Data frameworks such as Hadoop, Hive, and Spark.
- Hands-on experience with query languages such as HQL or SQL (Spark SQL) for data exploration.
- Data mapping: determine the mapping required to join data sets from multiple sources (illustrated in the first sketch after this list).
- Documentation: data mapping, subsystem design, technical design, and business requirements.
- Exposure to logical-to-physical mapping, data processing flow analysis to measure consistency, and related techniques.
- Data asset design/build: work with the data model/asset generation team to identify critical data elements and determine the mapping for reusable data assets.
- Understanding of ER diagrams and data modeling concepts.
- Exposure to data quality validation (see the second sketch after this list).
- Exposure to data management, data cleaning, and data preparation.
- Exposure to data schema analysis.
- Exposure to working in an Agile framework.
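To make the exploration and data-mapping bullets concrete, here is a minimal PySpark sketch. It is illustrative only: the `customers` and `orders` tables, their columns, and the `curated.customer_orders` output name are hypothetical and not part of the role.

```python
# Illustrative sketch only: the `customers` and `orders` tables, their
# columns, and the `curated.customer_orders` output are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("exploration-and-mapping-sketch")
    .enableHiveSupport()  # lets Spark SQL query Hive (HQL) tables
    .getOrCreate()
)

# Data exploration: profile a source table before mapping it.
spark.sql("""
    SELECT country,
           COUNT(*)                    AS row_count,
           COUNT(DISTINCT customer_id) AS distinct_customers
    FROM customers
    GROUP BY country
""").show()

# Data mapping: join two sources on a shared key into a reusable data asset.
mapped = spark.sql("""
    SELECT c.customer_id, c.country, o.order_id, o.amount
    FROM customers c
    JOIN orders o ON c.customer_id = o.customer_id
""")
mapped.write.mode("overwrite").saveAsTable("curated.customer_orders")
```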
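In the same spirit, a minimal sketch of the kind of data quality validation the role mentions, reusing the hypothetical `curated.customer_orders` table from the previous example; the checks shown are examples, not a prescribed process.

```python
# A minimal data-quality sketch, reusing the hypothetical
# `curated.customer_orders` table from the previous example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
df = spark.table("curated.customer_orders")

# Null checks on critical data elements: count nulls per column.
df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).show()

# Duplicate-key check: the asset should hold one row per order_id.
duplicates = df.groupBy("order_id").count().filter(F.col("count") > 1)
print(f"duplicate order_id values: {duplicates.count()}")
```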
How to Apply:
- First, read through all of the job details on this page.
- Scroll down and click the apply link; you will be redirected to the official website.
- Fill in the application with the required information.
- Cross-check the information you’ve provided before submitting the application.