Senior Analyst AI – Data Engineering, Air Canada

Company: Air Canada
Job title: Senior Analyst, AI – Data Engineering
Job location: Toronto, ON, Canada
Type: Full-time

Responsibilities:

  • Conceptualize and architect cloud-distributed data solutions for AI projects, with a PaaS-oriented approach.
  • Design, implement, deploy, test, and maintain solutions and large-scale processing systems.
  • Analyze data to identify patterns and insights that can optimize data ingestion and data presentation.
  • Support the delivery of AI projects by designing and implementing data pipelines and data streaming; design and develop processing pipelines that ingest data into a data lake.
  • Design and develop scalable ETL pipelines that move data in various formats from multiple sources between a data lake and a data warehouse.
  • Collaborate with data scientists on preparing the data needed to implement AI models in accordance with project requirements and AI data engineering standards.
  • Develop, construct, test, and maintain architectures, such as databases and large-scale processing systems.
  • Recommend and implement ways to improve data reliability, efficiency, and quality.
  • Develop datasets and databases using machine learning tools to solve real-time business problems.
  • Educate the organization, from both IT (including external partners) and business perspectives, on data ingestion and data streaming tools, processes, and best practices.
  • Support the maturation of AI platforms, modules, and services that address cross-enterprise opportunities through market research and proofs of concept.
  • Establish strong relationships with external partners to ensure strong delivery, innovation, and ongoing improvement of the high-value services received.
  • Develop and improve processes for data modeling, mining, and production.
  • Collaborate with stakeholders, including the product owner and the data science and design teams, to assist with data-related technical issues, support their data infrastructure needs, and identify opportunities for improvement.
  • Translate business needs into technical requirements.
  • Provide technology and services ownership direction to associated functional leads and peers on all matters related to a key functional area.
  • Provide technology-specific financial inputs related to both data engineering toolset costs and monthly data consumption costs.
  • Work with stakeholders related to a key functional area to ensure synergistic collaboration and the attainment of shared goals.
  • Ensure consistency of data engineering tools and processes across projects/products.
  • Provide business and technical inputs to Business Governance and Operational Management Committees, as appropriate.
  • Handle a high degree of technology complexity and drive autonomous decision-making as it relates to the adoption of technology best practices.

Requirements & Skills:

  • A relevant university degree/technical certification, and/or relevant experience commensurate with the role.
  • 5+ years of software engineering experience, with a minimum of 1 year working with modern data platforms and cloud technology as a data engineer, collaborating on the development and implementation of machine learning models.
  • Familiarity and/or strong interest in machine learning, statistical models, and AI.
  • Strong background in cloud computing services like Microsoft Azure.
  • Experience with real-time messaging systems (pub/sub, Kafka).
  • Experience with writing automated and functional tests.
  • Experience in feature engineering.
  • Excellent business acumen and communication skills.
  • Forward Thinking – Anticipating the implications and consequences of situations and taking appropriate action to be prepared for possible contingencies.
  • Analytical Thinking – Approaching a problem by using a logical, systematic, sequential approach.
  • Demonstrate significant technical depth to handle strategic technology priorities.
  • Proficiency in the following (or similar) technologies:
    • Relational databases like SQL Server and non-relational databases like Azure Cosmos DB.
    • Python programming skills with good OOP, functional, and/or analytical experience.
    • Azure Databricks, Data Lake Store, Data Factory.
    • Spark, including Delta Lake architecture principles, to handle both batch and streaming data ingestion scenarios.
  • The ideal candidate should be a member of, or qualify for admission to, a provincial engineering professional association.
