Job Description
- Translate business requirements into data models that are easy to understand and usable by different disciplines across the company
- Design, implement, and build pipelines that deliver data of measurable quality within agreed SLAs
- Assemble large, complex data sets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, SQL, and Azure big data technologies
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Recommend ways to continuously improve data reliability and quality
Job Requirement
- Degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- At least 1 year of experience in data engineering
- 1 year of experience in PL/SQL development
- Strong analytical skills for working with unstructured datasets
- Experience building and optimizing big data pipelines, architectures, and data sets
- Experience with visualization tools such as Power BI, Tableau, or other BI tools is a plus