Applied ML Engineer

ML – Full-time

Seattle, WA

Summary

We are seeking a talented and experienced Python NLP and LLM developer to join our team. The ideal candidate will have a strong understanding of natural language processing and machine learning, as well as experience integrating with large language models and building products on their APIs.

Qualifications

  • Master’s degree in Computer Science, Artificial Intelligence, Natural Language Processing, or a related field.
  • 1–2 years of experience developing NLP models and predictive AI solutions.
  • Experience developing NLP applications such as machine translation, text summarization, and question-answering.
  • Strong understanding of natural language processing, machine learning, and deep learning algorithms.
  • Strong experience with Python and Jupyter notebooks.
  • Experience with NLP libraries such as spaCy and frameworks such as PyTorch.
  • Experience with Git, CI/CD, and Linux environments.
  • Experience working with Agile development teams.
  • Excellent communication and collaboration skills.
  • Experience with deep learning architectures such as RNNs, CNNs, and Transformers.
  • Experience with transfer learning and fine-tuning of large language models.

Responsibilities

  • Build language models and deploy them as APIs.
  • Develop and implement state-of-the-art NLP and LLM solutions.
  • Design, engineer, and evaluate NLP pre-processing, model training, and inference pipelines.
  • Integrate with LLMs such as OpenAI's models to implement retrieval-augmented generation, synthetic content generation, and abstractive summarization.
  • Benchmark and compare multiple candidate models.
  • Evaluate and improve NLP and LLM performance.
