- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, MPP data warehouses, and Azure 'big data' technologies.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Support software developers, database architects, data analysts and data scientists on data initiatives and ensure optimal data delivery architecture throughout projects.
- Advanced working SQL knowledge and experience with relational databases (PostgreSQL, SQL Server), including query authoring (SQL/PL/SQL), stored procedures, and UDFs, as well as working familiarity with Massively Parallel Processing (MPP) data warehouse technologies (Netezza, Redshift).
- Solid experience with at least one MPP data warehouse technology such as Netezza or Redshift, as well as PostgreSQL.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with large structured/unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- Working knowledge of Unix shell scripting (e.g., bash).
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Experience building data-ingestion and ETL/ELT pipelines.
- 5+ years of overall experience, with 3+ years in a Data Engineer role.
- Good to have: experience with MS Azure Data Factory, Azure Data Warehouse, and Azure SQL Server.
- Experience with data pipeline and workflow management tools.
- Hands-on working experience in Linux and Windows environments.
- Good to have: experience in Python programming.
- Must be a good team player who takes ownership of responsibilities.
- Recognizes and respects the strengths of others in the organization
- Demonstrates a high degree of reliability, integrity, and trustworthiness
- Demonstrates strong negotiation, communication, and presentation skills.
Skills: PostgreSQL, Netezza, SQL, PL/SQL, Data Warehousing, Unix Shell Scripting