Data Engineer


Salient is one of the fastest-growing AI startups in consumer finance. In less than two years, we’ve achieved product-market fit, scaled to 8-figure ARR, and emerged as one of the undisputed leaders in financial voice AI.
A few fast facts:
- Backed by YC; raised the largest Series A for a B2B startup from a16z
- Reached product-market fit with a 19-person team building a speech AI agent that handles millions of real customer calls per day, fully deployed in production across major financial institutions (not just PoCs)
- On a mission to pass the Turing test for conversational speech in a telephony setting
- In-person office culture in San Francisco, CA
About the Role
We are seeking a skilled hybrid Data Engineer and Analyst who can analyze data and design, implement, and maintain our data infrastructure. The ideal candidate will work with customers to understand the key metrics they need, build efficient data pipelines and queries to surface them, and collaborate with cross-functional teams to deliver reliable data solutions that drive business insights.
Responsibilities
Data workflows: Design, build, and maintain scalable data pipelines using ETL/ELT processes to collect, process, and transform data from various sources
Customer engagement: Work with customers to understand the key metrics they need and analyze data to deliver solutions
Data modeling: Develop and optimize database schemas, data models, and data warehousing solutions
Data quality: Implement data validation and quality checks to ensure data integrity throughout the pipeline
Documentation: Create and maintain documentation for data processes, pipelines, and architectures
Engineering Support: Monitor, troubleshoot, and optimize data systems for performance, reliability, and cost-efficiency
Data automation: Automate routine data operations and establish monitoring systems for data flows
Requirements
3+ years of experience in data engineering or a similar role
Strong programming skills in Python, SQL, and shell scripting
Experience with data processing and transformation frameworks (e.g., Apache Spark, Airflow, Kafka, DBT)
Proficiency in designing and optimizing database schemas (relational and NoSQL)
Knowledge of data warehousing solutions
Familiarity with cloud platforms (AWS, GCP, or Azure) and their data services
Ability to work 4 days a week from our San Francisco office (open to candidates willing to relocate)
Nice to Have
Familiarity with data governance frameworks and compliance requirements
Experience with real-time data processing systems
As an early-stage company building at the frontier of AI, we work with high intensity and commitment. While schedules vary by role and team, many weeks will demand extra focus, flexibility, and time, particularly during major launches and high-impact sprints. We're seeking people who are aligned with and able to commit to that expectation, which includes 4 days per week in our San Francisco office.

Compensation Range: $180K - $250K
