Senior MLOps Engineer
Johannesburg Ward 102, South Africa
Duration: 6 Months (Negotiable)
Ref: Nombuso Sibeko
Starts: ASAP
Opened On: 21/01/2025
Required Skills
Experience with AWS
AWS Lambda
Python
Code refactoring
Mentoring
ECS
SQL
Architecture
Pipelines
Containerization
Job Description

Role Overview:

As a Senior MLOps Engineer, you will play a crucial role in the deployment and maintenance of machine learning and AI solutions developed by our data science team. This role requires a unique blend of deep technical knowledge and practical experience in data engineering, machine learning operations (MLOps), and AWS cloud services, along with strong interpersonal, problem-solving, and leadership skills.

Key Responsibilities:

• Design, implement, and maintain MLOps solutions in the cloud (AWS/Databricks).

• Implement and manage MLOps pipelines to streamline the deployment of machine learning models in the cloud (AWS/Databricks).

• Develop and maintain reusable feature stores, robust data architectures, and efficient data engineering practices.

• Review data science models, including code refactoring and optimization, containerization, deployment, versioning, and ongoing monitoring of model quality.

• Work closely with risk and governance teams to ensure compliance and security in cloud environments (AWS/Databricks).

• Establish and enforce best practices and standards for MLOps within the cluster.

• Provide technical leadership and coaching to junior team members.

• Coordinate cross-functionally with various technical teams to facilitate the integration of AI solutions into business processes.

• Continuously monitor and optimize the performance of deployed machine learning solutions.

• Contribute meaningfully to, and ensure that solutions align with, the design and direction of group architecture, cloud governance, data standards, principles, preferences, and practices. Short-term deployments must align with strategic long-term delivery.

Technical skills:

• Strong understanding of MLOps practices and principles, including CI/CD pipelines, version control, and model deployment.

• Ability to design and implement MLOps pipelines in the cloud (Databricks, AWS).

• Expertise in machine learning algorithms and frameworks such as scikit-learn, Keras, PyTorch, and TensorFlow.

• Proficiency in programming languages and frameworks such as Python, PySpark, and Spark.

• Familiarity with big data technologies and frameworks (e.g., Hadoop, Spark).

• Hands-on experience with AWS services for data processing such as AWS Glue, AWS Lambda, Amazon S3 and Amazon SageMaker.

• Proficiency in AWS management and deployment tools, including AWS CloudFormation, AWS CLI, AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, Amazon ECS, Amazon EKS, AWS Step Functions, and GitHub.

• Experience in designing and implementing scalable data architectures.

• Familiarity with building and maintaining feature stores.

• Knowledge of database management, SQL, ETL processes and data warehousing principles.

• Ability to translate complex technical concepts into understandable terms for non-technical stakeholders.

• Understanding of data security, privacy, and compliance standards relevant to the banking industry.

• Experience coordinating with risk and governance teams to ensure secure and compliant solutions.

• Strong communication skills for effectively collaborating with cross-functional teams and mentoring junior staff.

Minimum Requirements:

• Bachelor’s degree in Information Technology, Computer Science, Software Development, Engineering, or a related field.

• Minimum 7 years of post-graduate experience in a data engineering or MLOps role.

• Minimum 3 years’ experience working with Databricks or AWS cloud services.

• AWS Machine Learning Specialty certification is preferred.

• Proficiency in machine learning, data engineering, and cloud-based architectures.

• Excellent problem-solving skills and ability to work in a fast-paced environment.

• Strong communication and leadership skills.