Overview:
Our core purpose is “We are curators of unique brands, bringing elevated food and beverage experiences to Canadians.”
For more information, please visit our website at: http://www.treeoflife.ca.
Tree of Life Canada ULC is an employer that strives to provide an inclusive work environment that involves everyone and embraces the diverse talent of its people. We are committed to meeting the needs of persons with disabilities. If selected for an interview, we will be happy to work with you to ensure your interview is accessible and accommodation is provided. When your interview is being scheduled, please advise the Recruiter of how we might be able to support your participation.
NOTE: Reference checks will be conducted for potential candidates, and the information collected will be used in making the final hiring decision.
Primary Responsibilities:
Are you interested in exploring limitless opportunities for personal and professional development, all while contributing to meaningful work within a supportive and collaborative environment? Does change excite you? Would you like to be part of the team that's redefining how we achieve our goals? At Tree of Life Canada, we are transforming our approach to development, and we want you to play a pivotal role in it as our Lead Data Engineer! If you thrive in a fast-paced, agile environment where you'll wear different hats and embrace new and exciting challenges every day, we would love to have a conversation with you. This role can be performed either fully remote or office hybrid.
Essential Functions:
- Support and participate in food safety programs including SQF (Safe Quality Food)
- Data Architecture: Design and maintain data architecture and database systems to ensure efficient data storage and retrieval, supporting the organization’s strategic business goals.
- Data Pipeline Development: Develop and maintain ETL (Extract, Transform, Load) processes to collect, process, and transform data from various sources into usable formats.
- Data Integration: Integrate data from different sources, including databases, APIs, and external data feeds, into a unified data platform.
- Data Modeling: Create and manage data models to support analytics, reporting, and business intelligence needs.
- Performance Optimization: Optimize data pipelines and queries for maximum efficiency and performance.
- Data Quality Assurance: Implement data quality checks and validation procedures to ensure the accuracy and reliability of data.
- Data Governance: Enforce data governance policies and best practices to ensure data security, privacy, and compliance with regulations.
- Collaboration: Collaborate with business leaders, analysts, and other stakeholders to understand their data requirements and deliver data solutions that meet their needs.
- Technology Evaluation: Stay current with emerging technologies and evaluate their potential for improving data engineering processes.
- Troubleshooting: Identify and resolve data-related issues, such as data discrepancies, performance bottlenecks, and system failures.
- Scalability: Ensure that data infrastructure and pipelines are scalable to handle increasing data volumes.
- Security: Implement security measures to protect sensitive data and ensure compliance with data security standards.
- Continuous Improvement: Continuously evaluate and enhance data engineering processes and infrastructure for greater efficiency and reliability.
- Vendor Management: Manage relationships with data-related vendors and third-party service providers as needed.
- Training and Development: Provide training and development opportunities for team members to enhance their skills and knowledge.
Minimum Requirements, Qualifications, Additional Skills, Aptitude:
- 3 to 5 years of experience and a degree/diploma in Computer Science, Data Science, Engineering, or another related program.
- Cloud-Based Data Engineering: Proven experience in designing and building data engineering solutions in a cloud environment, with a focus on AWS.
- AWS Implementation: Hands-on experience implementing data pipelines, data lakes, and data warehouses on AWS.
- Cost Management: Experience in optimizing data engineering solutions for cost-effectiveness on AWS, utilizing features like AWS Cost Explorer and Cost Allocation Tags.
- Monitoring and Optimization: Proficiency in monitoring and optimizing data pipelines and resources on AWS for performance and cost-efficiency.
- Data Lake Management: Knowledge of best practices for managing and organizing data within an AWS data lake.
- Serverless Data Processing: Experience in developing serverless data processing solutions using AWS Lambda and related services.
- AWS Services: Proficiency in Amazon Web Services (AWS) cloud services, including AWS Glue, AWS Lambda, Amazon EMR, Amazon Redshift, Amazon S3, Amazon Managed Workflows for Apache Airflow, AWS Step Functions, Amazon Athena, Amazon SageMaker, and AWS Data Pipeline.
- Cloud Computing: Strong understanding of cloud computing principles, including scalability, elasticity, and cost optimization in a cloud environment.
- Data Lake Architecture: Ability to design and implement data lake architectures on AWS, often using Amazon S3 as the central storage repository.
- AWS Security: Understanding of AWS security best practices, including identity and access management (IAM), encryption, and security groups.
- Data Integration: Ability to integrate Power BI with various data sources, including cloud-based and on-premises databases, data warehouses, and APIs.
- Data Governance: Knowledge of data governance practices and tools within Power BI to ensure data accuracy and compliance.
- Collaboration: Proven ability to collaborate with cross-functional teams, including analysts and business stakeholders, to deliver data solutions in the AWS and Power BI environments.