Key Responsibilities:
- Experience working within a big data environment involving components such as Azure Data Factory, Databricks, ADLS, Azure Blob Storage, and Synapse. Excellent experience in Databricks is mandatory.
- Excellent PySpark and Spark SQL skills within Databricks. Good understanding of Delta tables, Unity Catalog, and optimization/performance tuning in Databricks (see the sketch after this list).
- Strong understanding of the Azure environment (PaaS, IaaS).
- Basic understanding of or experience with GCP (Google Cloud Platform) is helpful, though not mandatory.
- Exposure to developing ETL-based applications in Azure.
- Good knowledge of SQL, SQL warehouses, and NoSQL databases.
- Strong knowledge of Azure Data Lake.
- Knowledge of analytics models and stored procedures is a plus.
- Good exposure to client-facing roles.
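For illustration, here is a minimal sketch of the kind of Databricks work the role describes: reading a Delta table registered in Unity Catalog, applying a PySpark transformation, and running a routine Delta maintenance step. It assumes a Databricks runtime where `spark` is pre-created; all table and column names are hypothetical.

```python
# A minimal sketch, assuming a Databricks runtime where `spark` (a
# SparkSession) is pre-created. All table and column names below are
# hypothetical, for illustration only.
from pyspark.sql import functions as F

# Read a Delta table registered in Unity Catalog (catalog.schema.table).
orders = spark.table("main.sales.orders")

# A typical PySpark transformation: daily revenue from completed orders.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)
daily_revenue.write.mode("overwrite").saveAsTable("main.sales.daily_revenue")

# Routine performance tuning on Delta: compact small files and co-locate
# rows by a frequently filtered column to speed up reads.
spark.sql("OPTIMIZE main.sales.orders ZORDER BY (order_ts)")
```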
Soft Skills:
- Good communication skills and a confident personality; able to gather requirements from the onsite team and demonstrate output to other team members.
- Ability to communicate well with both technical and non-technical staff.
- Excellent problem-solving and analytical skills; able to handle challenging situations and act appropriately.
- Excellent learning skills; must be able to learn quickly and guide other team members.
- Self-starter who takes responsibility and ownership and is self-driven.
- Ability to work independently with minimal supervision as well as within a team.
- Good team player with a commitment to high-quality output and service.
How to Apply:
- First, read through all of the job details on this page.
- Scroll down and press the Click Here button.
- Click the apply link to be redirected to the official website.
- Fill in the application with the required information.
- Before submitting the application, cross-check the information you’ve provided.