A career at our company is an ongoing journey of discovery: our roughly 52,000 people are shaping how the world lives, works and plays through next-generation advancements in healthcare, life science and performance materials. For more than 350 years and across the world we have passionately pursued our curiosity to find novel and vibrant ways of enhancing the lives of others.
Your Role: As a Senior Developer, you will be part of the IT Application Management team that supports data warehouse systems built with Python.
You will take part in Level 2 and Level 3 production support of existing Python programs, covering business processes, communications, issue triage, and more.
As a member of the Data Warehouse team, you will work on prioritized incidents, enhancements, projects, code deployments, application upgrades, and performance testing support.
You should understand our methodology, design, specification, programming, delivery, monitoring, and support standards. You will be involved in strategic planning activities and will research emerging technologies, tools, and ETL standards for use in existing or new data marts.
- Implementing, maintaining, and improving ETL/DDL processes using Python
- Working with various internal teams to meet analytical needs by building custom solutions, both large and small
- Supporting the development and maintenance of an AWS Cloud platform for Big Data processing and analytics
- Partnering with the Data Science team to investigate and implement advanced statistical models and machine learning pipelines
Bachelor's degree in Computer Science or a related discipline required.
- 5+ years of experience architecting, building, and maintaining software platforms and large-scale data infrastructures in a commercial or open-source environment
- Big Data and Data Governance Expertise
- Experience with integration of data from multiple data sources, knowledge of various ETL techniques and frameworks
- Experience building and optimizing Big Data pipelines, architectures, and data sets
Essential and Critical Skills
- Excellent knowledge of Python and pandas/numpy libraries
- Hands-on experience with the AWS cloud (S3, EC2, EMR, Redshift, etc.)
- At least 3 years' experience with Hadoop MapReduce or other Big Data technologies and pipelines (Hadoop, Spark/PySpark, R, MapReduce, NoSQL, RDBMS, etc.)
- Good communication skills; able to provide regular project updates
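To give a flavor of the day-to-day work described above, here is a minimal sketch of an ETL step in Python with pandas. The data, column names, and extract/transform/load functions are purely illustrative (a real job in this role would likely pull from S3 or Redshift rather than an in-memory sample):

```python
import io
import pandas as pd

# Hypothetical sample input; in production this would come from a
# source system such as S3 (e.g. via boto3) rather than a string.
RAW_CSV = (
    "order_id,region,amount\n"
    "1,EMEA,100.00\n"
    "2,APAC,250.50\n"
    "3,EMEA,75.25\n"
)

def extract(source) -> pd.DataFrame:
    """Read raw order records into a DataFrame."""
    return pd.read_csv(source)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate order amounts per region."""
    return df.groupby("region", as_index=False)["amount"].sum()

def load(df: pd.DataFrame) -> str:
    """Serialize the result; a real pipeline might write to a data mart."""
    return df.to_csv(index=False)

result = load(transform(extract(io.StringIO(RAW_CSV))))
print(result)
```

The extract/transform/load split shown here is a common way to keep each pipeline stage independently testable, which matters for the Level 2/3 support work this role involves.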
What we offer: With us, there are always opportunities to break new ground. We empower you to fulfil your ambitions, and our diverse businesses offer various career moves to seek new horizons. We trust you with responsibility early on and support you to draw your own career map that is responsive to your aspirations and priorities in life. Join us and bring your curiosity to life!
Curious? Apply and find more information at https://jobs.vibrantm.com