Content Integrity Analyst

OpenAI
San Francisco, CA, USA
Posted on Jan 13, 2026

Location: San Francisco

Employment Type: Full time

Department: Go To Market

Compensation: $280K • Offers Equity

The base pay offered may vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. If the role is non-exempt, overtime pay will be provided consistent with applicable laws. In addition to the salary range listed above, total compensation also includes generous equity, performance-related bonus(es) for eligible employees, and the following benefits.

  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts

  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)

  • 401(k) retirement plan with employer match

  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)

  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees

  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)

  • Mental health and wellness support

  • Employer-paid basic life and disability coverage

  • Annual learning and development stipend to fuel your professional growth

  • Daily meals in our offices, and meal delivery credits as eligible

  • Relocation support for eligible employees

  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends, may also be provided.

More details about our benefits are available to candidates during the hiring process.

This role is at-will and OpenAI reserves the right to modify base pay and other compensation components at any time based on individual performance, team or company results, or market conditions.

About the Team

Trust & Safety Operations is central to protecting OpenAI’s platform, customers, and the public from abuse. We support a diverse customer base -- from individual users and early-stage startups to global enterprises -- across ChatGPT, our API, and new product surfaces as they launch.

Within the Support organization, we partner closely with Product, Engineering, Legal, Policy, Go To Market, and Operations teams to deliver a great user experience at scale while reducing material harm and mitigating catastrophic risks.

About the Role

We’re hiring experienced Trust & Safety / Content Integrity operators who can investigate complex cases, apply and evolve usage policy in real-world scenarios, and help build scalable systems that reduce risk over time. You will contribute as a subject-matter expert (SME) on high-stakes escalations, partnering with cross-functional stakeholders to drive fast, defensible outcomes. You will also help design the processes, tooling, and automation that power safe operations at scale.

This role is ideal for someone who combines strong judgment with sharp analytical instincts, and who enjoys turning ambiguity into clear decisions, repeatable workflows, and durable automation.

Please note: This role may involve handling sensitive content, including material that may be highly confidential, sexual, violent, or otherwise disturbing.

Location: San Francisco, CA (hybrid: 3 days in office/week).

In this role you will:

  • Apply usage policy with rigor and nuance: Interpret and apply OpenAI’s usage policies to complex, novel scenarios; provide clear guidance to customers and internal teams; document edge cases and propose policy refinements.

  • Mitigate material harm and catastrophic risks: Triage, assess, and support actions on content and behavior that can drive real-world harm, including high-severity domains; escalate appropriately and help drive cases to resolution.

  • Serve as an escalation SME for high-stakes cases: Support incident response and executive-visible escalations by producing clear assessments, recommending next steps, and coordinating with Legal, Compliance, Security, Product, and Engineering as needed.

  • Build scalable trust workflows: Design and operate processes for human-in-the-loop labeling, content/user reporting, appeals, enforcement actions, and continuous QA -- with a high bar for quality and consistency.

  • Drive automation and operational efficiency: Identify repeatable patterns, translate them into requirements, and partner with Engineering and Data teams to ship tooling and automation (including LLM-enabled automation) that improves speed, accuracy, and coverage.

  • Analyze trends and strengthen feedback loops: Use quantitative and qualitative analysis to surface emerging abuse patterns, measure policy and tooling performance, and feed insights back into detection systems, product mitigations, and policy updates.

  • Raise the quality bar: Define and monitor KPIs, build calibration and QA programs, iterate on reviewer training, and improve guidelines and tooling based on error analysis.

  • Enable internal and external teams: Create playbooks, SOPs, and training that help partner teams understand our enforcement posture, risk thresholds, and operational philosophy.

You might thrive in this role if you:

  • Build for scale: You’ve taken workflows from zero to one and then scaled them without sacrificing quality.

  • Bring deep Trust & Safety experience: 5+ years in Trust & Safety, integrity, risk, or policy enforcement; experience working with vendors is a plus.

  • Have strong judgment under ambiguity: You can make defensible calls in gray areas, write clearly, and adjust quickly as new information arrives.

  • Are analytically strong: You can assess risk, spot trends, and use data to prioritize problems and evaluate solutions (data fluency is a plus).

  • Bias toward automation: You’ve shipped operational efficiencies through tooling, process redesign, and automation; you’re comfortable leveraging LLMs to improve triage, labeling, QA, or enforcement consistency.

  • Operate well cross-functionally: You can translate nuanced operational reality to engineers and policy stakeholders, and drive alignment without escalation-by-default.

  • Stay humble and collaborative: You learn quickly, share context generously, and optimize for team outcomes.

  • Have experience with high-severity safety domains (for example: CBRN, cyber abuse).

  • Have experience building QA programs, calibration loops, and measurable reviewer performance systems.

  • Have hands-on experience writing requirements for internal tools, piloting automation, or partnering closely with Engineering on safety systems.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
