

Infrastructure Hardware Technical Program Manager (Server and Network Systems)

Cerebras
Sunnyvale, CA or Toronto, Canada · Onsite · 4 days ago
full-time · lead · gpt-5 · custom · open-source

About the Role

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs.

Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras (https://openai.com/index/cerebras-partnership/) to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference.

Thanks to this groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

As an Infrastructure Hardware Technical Program Manager (Server and Network Systems) on the Cluster Architecture Team, you will drive end-to-end delivery of server and network platform programs across Cerebras CS-3-based AI clusters, from requirements and vendor selection through lab bring-up, qualification, and production rollout. You will be the execution owner for multi-team programs spanning OEM/ODM partners, component vendors, internal software/runtime teams and architects, validation/QA, and deployment/operations.

This role is intentionally technical: you must understand server, network, and system-level trade-offs well enough to run effective technical reviews, keep programs grounded in real constraints, and maintain a crisp decision trail, while partnering closely with the Compute, Server, and Network Platform Architects for detailed technical direction and sign-off. You will also build shared understanding with our rack/elevations and physical datacenter design partners so that server and network changes land smoothly in real deployments (without owning physical DC design).

Responsibilities

  • Own end-to-end program execution for server systems and network equipment in Cerebras clusters, including new platforms, refreshes, and major component/config changes.
  • Drive requirements gathering and convert inputs into executable plans with clear milestones, readiness gates, and cross-functional deliverables.
  • Represent Cluster Architecture in executive reviews, OKR cycles, and leadership/customer forums as needed.
  • Build and manage integrated schedules across vendors and internal teams; track dependencies, critical path, and risks.
  • Manage OEM/ODM and switch-vendor engagements (RFI/RFP, samples, escalations, roadmap alignment).
  • Partner with Compute, Server Platform, and Network Architects to turn architectural decisions into qualification plans, acceptance criteria, and rollout strategies.
  • Lead qualification and release readiness (lab/staging validation, regression tracking, go/no-go decisions).
  • Own risk and change management into production, including versioning, rollout sequencing, and stakeholder communication.
  • Ensure operational readiness with deployment and fleet teams, and maintain alignment with rack/physical DC owners on power, cooling, space, and cabling constraints.

Skills and Qualifications

  • B.S. or M.S. in Computer Science, Electrical/Computer Engineering, or equivalent experience.
  • 8+ years in Technical Program Management (or similar delivery leadership) for server, network, or infrastructure platforms, from concept through production.
  • Experience coordinating complex server and/or datacenter network programs across OEM/ODMs, switch vendors, and internal engineering teams.
  • Working knowledge of server architecture (CPU/NUMA, memory bandwidth, PCIe, NIC and storage IO) and enough networking fundamentals (leaf-spine fabrics, switch platforms, high-performance interconnects) to run effective technical reviews.
  • Familiarity with Linux server fleet management (provisioning, firmware/BIOS, drivers, field triage).
  • Strong multi-team program execution skills: integrated plans, risk management, dependency tracking, and executive-level communication.
  • Ability to operate in ambiguity and keep parallel server and network workstreams aligned.
  • Experience with AI/ML, HPC, or performance-sensitive distributed infrastructure is a plus.

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we've reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open-source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Enjoy a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2026 (https://www.cerebras.net/blog/5-reasons-to-join-cerebras).

Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth, and support of those around them.

This website or its third-party tools process personal data. For more details, review our CCPA disclosure notice at https://www.cerebras.net/privacy/.

Required Skills

Go · Scala · RAG · Agent Orchestration

About Cerebras

Building the largest AI chips in the world for training massive models.

