Join Our Team
Help us build the future of AI training by capturing authentic human experiences. We're looking for passionate people who want to make a meaningful impact.
Open Positions
Director of Data Operations
LifeTrade is building the first intelligence marketplace for longitudinal human reasoning data: real decisions, over time, in real lives. The Director of Data Operations will own core pipelines that transform this reasoning into research-grade training data for AI labs.
About LifeTrade
LifeTrade sells “wisdom-grade” training data for agentic AI: longitudinal, in-situ human reasoning captured ethically at scale, at a predictable intelligence-as-a-service price point. Where others optimize for answers, LifeTrade optimizes for how decisions are made over time—context, emotion, environment, and trade-offs in real situations, not sandbox prompts. As web-scale text and synthetic corpora hit diminishing returns, labs are turning to post-training and evaluation data that captures real human judgment. LifeTrade is positioned to be that substrate, with transparent sourcing, explicit consent, and fair wages creating both an ethical moat and a regulatory advantage.
About Data Operations at LifeTrade
Data Operations runs the reasoning factory that turns raw conversational sessions into structured, high-fidelity datasets for leading AI labs. The team’s mandate is to design new pipelines, test whether they unlock specific model capabilities, and then scale them with tight control over quality, cost, and worker experience. This involves working from a target capability (e.g., longitudinal planning, trade-offs under constraint, preference formation), experimenting with different data shapes and protocols, and then industrializing what works into robust, observable pipelines with clear metrics and QA.
As Director of Data Operations, you will own a major slice of LifeTrade’s production engine and lead the team that makes our datasets real. You’ll be responsible for taking new reasoning-data products from fragile prototypes to stable, scalable pipelines that customers can depend on. You will sit at the intersection of research, product, and operations, acting as the operational counterpart to our senior team and research leads as we scale from early pilots to a true intelligence marketplace. This is a hands-on leadership role for an AI-native operator who is as comfortable in SQL and experiment dashboards as in 1:1s and execution reviews.
What you’ll do
- Build and lead a small team of Data Operations / Data Product Operations leads to design, launch, and scale LifeTrade pipelines from 0→1 experiments to 1→N production systems.
- Partner with executive leadership on how we evolve delivery capabilities, staffing models, and systems as we onboard new labs, use cases, and data products.
- Continuously monitor pipeline health across sourcing, worker experience, quality, throughput, and unit economics; own detection, diagnosis, and resolution of issues end-to-end.
- Use metrics to run the function: define and track KPIs such as hours captured, dataset readiness SLAs, quality scores, rework rates, and margin by pipeline.
- Work closely with AI labs to translate desired model capabilities into concrete data workflows, protocols, and project plans, including evaluation hooks.
- Partner with internal research to design and test new “shapes” of reasoning data (longitudinal studies, scenario designs, agent benchmarks) and operationalize them.
- Own outcomes across functions: design systems, coordinate cross-functional execution, manage vendors/contractors, and ensure reliable delivery at both prototype and production scale.
You might be a fit if
- You have 2–5 years of post-graduate experience running complex projects or systems (e.g., research lab operations, large academic studies, applied data/ML teams, or ops/product roles in tech).
- You hold at least a master’s degree in a quantitative or systems field (e.g., CS, statistics, operations research, industrial engineering, HCI, information science); a PhD is strongly preferred.
- You are genuinely AI-native: you use modern AI tools daily in your own workflow, understand foundation models and RLHF-style pipelines at a conceptual level, and can reason clearly about data requirements for training and evaluation.
- You’ve designed or scaled processes, experiments, or teams across changing requirements—whether in academia, industry, or startup environments—and you are comfortable owning outcomes, not just running tasks.
- You combine high standards and attention to detail with a bias toward action, and you stay steady in ambiguous, half-built environments.
- You are low-ego, collaborative, and motivated by building an ethical, transparent intelligence marketplace rather than chasing vanity metrics.
Nice to have
- Experience with data, ML, RLHF / evaluation, or other large-scale human-in-the-loop operations.
- Background in CS, industrial engineering, operations research, or similar quantitative / systems fields.
AI Training Data Researcher (PhD/Master's Student)
Requirements: must be AI-fluent, with a deep understanding of modern training pipelines.
About LifeTrade
LifeTrade is building the next generation of training data for AI alignment. We capture longitudinal, causal, grounded human reasoning—the kind of authentic preference data that frontier AI labs need for RLHF (Reinforcement Learning from Human Feedback) and Direct Preference Optimization (DPO). Our data model is grounded in emerging academic frameworks like Georgia Tech's LEAF (Lived Experience-centered AI) methodology. We're solving the preference data bottleneck that limits how well AI systems can learn genuine human judgment.
The Role
We're looking for a graduate-level researcher to help us bridge academic AI research and our go-to-market strategy. This role sits at the intersection of cutting-edge ML research and commercial AI infrastructure.
Research & Literature Review
- Track latest papers on RLHF, DPO, preference learning, AI alignment, and data quality
- Monitor key researchers and labs working on alignment and human feedback systems
- Identify academic frameworks that validate our data model
Academic Outreach & Collaboration