AI/ML and Data Science Developer
Hybrid - Stevenage (3-4 days per week on site)
£70,000 - £80,000
About the Role:
We are looking for an AI/ML and Data Science Developer to join a growing team working on cutting-edge Generative AI and Data Science solutions. This role offers the opportunity to design, develop, and deploy advanced AI/Machine Learning models, including Large Language Models (LLMs) and GenAI, while working with diverse datasets and modern deployment technologies.
You'll play a key role in data processing, model fine-tuning, and deployment, helping to shape innovative solutions that make a tangible impact.
Key Responsibilities:
- Design, develop, and deploy AI/Machine Learning models and solutions, including LLMs and GenAI.
- Fine-tune and evaluate open-source LLMs, applying techniques such as prompt engineering and model re-tuning.
- Work with a variety of structured and unstructured datasets, handling preprocessing, cleaning, and feature engineering.
- Develop pipelines for creating, preparing, and optimising data for modelling.
- Deploy models to production using containerisation tools (Docker, Kubernetes) and ensure scalability, robustness, and monitoring.
- Research and apply the latest advancements in Generative AI and data science.
- Support model evaluation, logging, and performance tuning across GPU-based environments.
- Collaborate with stakeholders to gather requirements and deliver solutions aligned to business needs.
- Document workflows, data pipelines, and model processes for knowledge transfer and reproducibility.
Key Skills & Experience:
- 4-5 years' experience across AI/ML, data science, or data engineering, with recent hands-on work in GenAI.
- Proven experience fine-tuning and deploying open-source LLMs.
- Strong knowledge of AI/ML algorithms and techniques (supervised, unsupervised, reinforcement learning).
- Solid background in data preprocessing, wrangling, and feature engineering.
- Proficiency in Python (essential) and familiarity with relevant libraries (NumPy, pandas, scikit-learn, TensorFlow, PyTorch).
- Experience with prompt engineering and model evaluation.
- Deployment experience using Docker or other containerisation tools.
- Exposure to GPU-based environments for large-scale model training and tuning.
- Experience with big data tools (Spark, Hadoop) and cloud platforms (AWS, GCP, Azure) is a plus.
- Strong analytical mindset with the ability to translate data into actionable insights.
If you or someone you know might be interested, please get in touch and ask for Ben.
