
Machine Learning Engineer

Ontario - Permanent



Job Description

As a Machine Learning Engineer on this team, you would be responsible for training, testing, and deploying the data science pipeline: from data extraction and transformation to modelling this data so that it can be efficiently served via our APIs. Our client partner is looking for an experienced Machine Learning Engineer who will bring subject matter expertise and best practices to all aspects of working with big data.

Your team will own
- Collaboration with ML Researchers to add new ML services to the pipeline
- Data distribution and processes for large-scale ML pipelines
- Design of user-in-the-loop pipelines for active learning
- Leadership in the adoption of best practices when working with Big Data

In a typical week, you might
- Improve ML models several times over
- Automate various components of the training/test pipelines and model deployments
- Import new clinical data to work with our internal ontologies
- Resolve performance issues with various aspects of the pipeline and application

About you
- Care about improving healthcare using the latest in machine learning and artificial intelligence
- Own problems end-to-end in a strongly ownership-driven culture
- Care more about reviewing and adopting industry-wide data science practices to solve problems than about writing a lot of code
- Comfortable massaging and cleaning up datasets for loading


Special Perks:

- Work with awesome people who support and challenge one another to bring out the best in each other
- Leadership positions as we continue to grow the team
- Competitive salary and participation in company success through employee stock options
- Health, dental, and vision benefits
- Conference participation and publishing opportunities in ML, AI, NLP, and Bioinformatics
- Catered company lunches every Monday and Friday
- Retreats and outings to bond with your team
- Unlimited coffee and snacks


Must Have Skills:

- Experience architecting and designing high-performance server-side components and big data processing pipelines using popular libraries and frameworks
- Experience with TensorFlow (or similar libraries), from GPU training and efficient input pipelines (queues, the Dataset API, and the like) to deployment of packaged/compiled models and distributed computing (sharding, clusters, etc.)
- Experience with resource management and service workflows: queue systems (RabbitMQ, Kafka), AWS services (RDS, S3), container orchestration (Kubernetes, Docker), and build/automation systems (Ansible, Terraform, Jenkins, etc.)
- Experience with big data technologies such as Postgres, Cassandra, Spark, Hadoop, and Hive
- Experience developing production code in Python, Java/Scala, and/or C/C++


Nice to Have Skills:

- Master's degree in Computer Science or a quantitative field, with at least one year of industry experience


Details:
Starting: ASAP