LLMHire



Member of Technical Staff, Applied Research

Fireworks AI
San Mateo, CA · Onsite · 4 days ago
Full-time · Senior · Custom

About Us

At Fireworks, we're building the future of generative AI infrastructure. Our platform delivers the highest-quality models with the fastest and most scalable inference in the industry. We've been independently benchmarked as the leader in LLM inference speed and are driving cutting-edge innovation through projects like our own function calling and multimodal models. Fireworks is a Series C company valued at $4 billion and backed by top investors including Benchmark, Sequoia, Lightspeed, Index, and Evantic. We're an ambitious, collaborative team of builders, founded by veterans of Meta PyTorch and Google Vertex AI.

The Applied Researcher role is designed for engineers who love working across ML, systems, and real-world products, and who thrive on working directly with customers to bring advanced models into production.

About the Role

As an Applied Researcher, you will sit at the intersection of ML research, systems engineering, and customer-facing problem solving. You'll work hands-on with customers and their data to tune, evaluate, and deploy models using techniques such as SFT, DPO, and RL, helping customers build competitive models from their unique data, tailored to their unique products.

You will be the technical bridge between customer needs, customer data, and our tuning and serving infrastructure, helping shape the future of applied AI.

Minimum Qualifications

  • BS/MS in Computer Science, Electrical Engineering, Machine Learning, or a related field, or equivalent practical experience; open to all levels of experience.
  • Strong experience with PyTorch and modern Transformer architectures.
  • Solid computer science fundamentals: data structures, algorithms, concurrency, distributed systems, networking.
  • Hands-on experience training, fine-tuning, or evaluating machine learning models, preferably LLMs.
  • Familiarity with recent developments in LLM research, including model architectures, training methods, and evaluation strategies.
  • Passion for partnering with customers: understanding their constraints, co-designing solutions, and iterating on real-world feedback.
  • Curiosity and enthusiasm for exploring a wide range of problem domains and project types, from quick experiments to long-running, complex engagements.
  • Ability to operate in a fast-paced, ambiguous environment and drive projects independently.

Preferred Qualifications

  • Experience working directly with customers to deliver end-to-end modeling solutions, from understanding their data and product requirements to deploying tuned models in production.
  • Strong familiarity with evaluation methodologies for LLMs (benchmarks, custom evals, error analysis).
  • Proficiency in diagnosing system-wide problems that keep customers from achieving the outcomes they want.
  • Deep understanding of tuning techniques (SFT, DPO, RL) and the underlying mathematical principles.
  • Knowledge of infrastructure components that enterprises commonly use, such as Databricks, S3/GCS storage, SageMaker, and artifact registries.
  • Familiarity with cloud-native tooling (containers, Docker, Kubernetes, or similar).

Why Fireworks AI?

  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.
  • Build What's Next: Work with bleeding-edge technology that shapes how businesses and developers harness AI globally.
  • Ownership & Impact: Join a fast-growing, passionate team where your work directly shapes the future of AI; no bureaucracy, just results.
  • Learn from the Best: Collaborate with world-class engineers and AI researchers who thrive on curiosity and innovation.

Fireworks AI is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all innovators.
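To give a concrete feel for the supervised fine-tuning (SFT) work the role describes, here is a minimal sketch in plain PyTorch: a next-token cross-entropy step where loss on the prompt tokens is masked out so the model trains only on the response. `TinyLM` and `sft_step` are illustrative names, not part of Fireworks' stack, and the toy model stands in for a real Transformer.

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy causal LM (embedding -> linear head) standing in for a Transformer."""
    def __init__(self, vocab_size=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        return self.head(self.embed(ids))

def sft_step(model, opt, input_ids, prompt_len):
    """One SFT step: next-token cross-entropy, prompt positions masked with -100."""
    logits = model(input_ids[:, :-1])          # predict token t+1 from token t
    labels = input_ids[:, 1:].clone()
    labels[:, : prompt_len - 1] = -100         # ignore loss over the prompt
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=-100,
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

torch.manual_seed(0)
model = TinyLM()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batch = torch.randint(0, 100, (4, 16))         # fake prompt + response token ids
losses = [sft_step(model, opt, batch, prompt_len=8) for _ in range(20)]
```

DPO and RL-based tuning build on the same loop but replace the masked cross-entropy with a preference or reward-driven objective.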

Required Skills

PyTorch · Kubernetes · Docker · Scala · RAG · Fine-tuning · MCP

About Fireworks AI

Fast and affordable AI inference platform for production workloads.
