Upwork is hiring an LLM Research/Implementation Engineer - Contract to Hire

Upwork  ·  US  ·  $75k/yr - $210k/yr
about 2 years ago

Join Arcee: Pioneers in Custom AI Language Models

About Us:

Arcee is a trailblazer in the AI industry, specializing in the creation of custom, in-domain large language models tailored to the unique tasks and requirements of our clients.

The Opportunity:

We are on the hunt for a driven professional with expertise in pretraining and fine-tuning Large Language Models (LLMs). The ideal candidate will have experience in transforming datasets into embeddings and training LLMs from scratch, with a focus on datasets of 65 billion tokens or more.

A key part of the role involves identifying high-quality tokens within a dataset, extracting them from the entire corpus, and synthetically generating high-quality text (tokens) using GPT-3.5/GPT-4.
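The quality-filtering step above can be sketched in miniature. A minimal, illustrative example follows; real curation pipelines (including the "Textbooks Are All You Need" approach) score documents with a trained classifier or an LLM judge, so the function name and thresholds here are assumptions for illustration only:

```python
# Toy heuristic filter for selecting higher-quality documents from a corpus.
# Real pipelines use classifier- or LLM-based scoring; the function name
# is_high_quality and the thresholds below are illustrative assumptions.

def is_high_quality(doc: str, min_words: int = 20, min_alpha_ratio: float = 0.6) -> bool:
    """Keep documents that are long enough and mostly natural language."""
    words = doc.split()
    if len(words) < min_words:
        return False
    # Fraction of alphabetic characters: crude proxy for "real prose"
    alpha = sum(ch.isalpha() for ch in doc)
    return alpha / max(len(doc), 1) >= min_alpha_ratio

corpus = [
    "A short fragment.",
    "Large language models are trained on billions of tokens of text. "
    "Careful curation of that text strongly affects downstream quality, "
    "which is why filtering pipelines score every document before training.",
    "@@@### 123 456 789 !!! $$$ %%% ^^^ &&& *** ((( )))",
]
kept = [doc for doc in corpus if is_high_quality(doc)]
```

The documents that pass the filter would then feed the synthetic-generation stage, where a stronger model rewrites or extends them into additional training text.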

Familiarity with in-domain pretraining methods as outlined in "Textbooks Are All You Need" (https://arxiv.org/abs/2306.11644) is essential, as is experience with fine-tuning methods such as PEFT, QLoRA, and TRL. Proficiency in optimization using DeepSpeed and Hugging Face Optimum is also a must.
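For context on the fine-tuning methods named here: PEFT's LoRA (and its quantized variant, QLoRA) rests on a low-rank weight update. A minimal NumPy sketch of that core idea, with illustrative shapes and scaling, not a PEFT implementation:

```python
import numpy as np

# Sketch of the low-rank update behind LoRA/QLoRA (illustrative dimensions).
# Instead of updating a frozen weight W (d_out x d_in) directly, LoRA learns
# two small matrices B (d_out x r) and A (r x d_in) with rank r much smaller
# than d_out/d_in, and applies W_eff = W + (alpha / r) * B @ A.
# Only A and B are trained; W stays frozen (and 4-bit quantized in QLoRA).

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

W_eff = W + (alpha / r) * B @ A            # B starts at zero, so W_eff == W initially

# Trainable parameters per adapted layer, adapter vs full weight:
lora_params = r * (d_in + d_out)  # 8 * 128 = 1024
full_params = d_in * d_out        # 64 * 64 = 4096
```

The parameter counts at the end show why these methods matter for this role: the adapter trains a small fraction of the weights, which is what makes fine-tuning large models tractable on modest hardware.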

Your Skills:

LLM Pretraining

PyTorch

Transformers

PEFT

QLoRA

Hugging Face Optimum

DeepSpeed

This role is initially a part-time contracting position. However, as a dynamic start-up, we anticipate additional responsibilities and opportunities may arise, potentially leading to a full-time position.

Your Role:

Phase 1: LLM Pretraining

You will assist in pretraining a 7-billion-parameter, in-domain LLM using the methods outlined in the "Textbooks Are All You Need" paper.
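To give a sense of the scale of this phase: a common back-of-envelope rule estimates training compute as roughly 6 FLOPs per parameter per token. The figures below are an outside estimate applied to the numbers in this posting, not something the posting itself states:

```python
# Rough training-compute estimate for a 7B-parameter model on 65B tokens,
# using the common approximation C ~= 6 * N * D FLOPs
# (N = parameters, D = training tokens). An estimate, not a spec.

N = 7e9            # model parameters
D = 65e9           # training tokens
flops = 6 * N * D  # total training FLOPs, roughly 2.7e21
```

At that scale, multi-GPU optimization with DeepSpeed (listed in the skills above) is a practical necessity rather than a nice-to-have.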

Phase 2: Fine-tuning

You will integrate best-in-class fine-tuning methods into the Arcee fine-tuning toolkit.

Join us at Arcee and be part of the future of custom AI language models.

Job is closed