Data Engineer


Description: JPI is hiring a Data Engineer to support federal clients in designing and implementing scalable data pipelines, cloud-native architectures, and modern data platforms that drive mission impact. This role plays a key part in our clients’ efforts to become data-driven organizations—turning fragmented, siloed data into structured insights using best-in-class technologies like AWS, Databricks, and Python-based frameworks. The ideal candidate combines technical expertise with a consulting mindset, capable of working across teams to translate data strategy into actionable solutions.
The Data Engineer will be responsible for architecting ingestion pipelines, transforming large datasets, and enabling advanced analytics across cloud and hybrid environments. This includes working with structured and unstructured data, integrating APIs, and building out high-performance data pipelines using Databricks and Spark. The role involves close collaboration with analysts, developers, and mission stakeholders to ensure solutions are agile, secure, and aligned with evolving priorities.
At JPI, we strive to empower our people and excel for our clients. We hold ourselves to high standards and prioritize our values of being one team with unwavering integrity. We are motivated by our mission and driven to deliver solutions that exceed expectations. Will you join us?
Responsibilities Include:

Project Planning / Management
- Participate in Agile ceremonies including sprint planning, daily standups, and retrospectives.
- Support the development of technical documentation and briefing materials for leadership.
- Collaborate with clients and internal teams to align data architecture with business and mission goals.
- Track and maintain technical deliverables, action items, and project risks.

Solution Development & Integration
- Lead the design and implementation of modern, scalable data pipelines (ETL/ELT) and transformations using Databricks and other cloud-native tooling.
- Migrate and modernize legacy data platforms to AWS (or other major cloud providers).
- Build reliable ingestion mechanisms for batch and real-time data.
- Integrate datasets across disparate platforms using APIs and custom connectors.
- Develop robust data models and schema designs for both transactional and warehouse systems.
- Provide expertise in system integration, database design, and performance optimization.

Cloud Infrastructure & Programming
- Architect and implement cloud-based data platforms (preferably AWS; Azure or GCP acceptable).
- Write efficient, well-documented code in Python and SQL to support scalable data processing.
- Leverage orchestration tools (e.g., Apache Airflow, Luigi, or AWS Step Functions) for pipeline automation.
- Leverage Databricks Notebooks, Delta Lake, and Spark to build and maintain high-performance data processing workflows.
- Utilize technologies such as Kafka, Spark, and Redshift for big data processing.
- Perform regular code reviews and enforce best practices for data governance and security.

Data Analytics & Reporting Support
- Partner with BI developers and data analysts to enable downstream reporting tools like Power BI or Tableau.
- Collaborate with data scientists to provision clean, well-structured data assets.
- Enable data streaming, processing, and analysis of large-scale structured and semi-structured data.
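To give candidates a concrete sense of the extract-transform-load work described above, here is a minimal, illustrative sketch of a batch ETL pipeline in Python. It is a hypothetical example, not JPI's actual codebase: the records, table name, and schema are invented, and Python's built-in sqlite3 stands in for a warehouse target such as Redshift or Delta Lake.

```python
import sqlite3

# Hypothetical raw records, standing in for a batch extract
# from an API endpoint or cloud landing zone.
RAW = [
    {"id": "1", "name": "Alpha", "amount": "10.5"},
    {"id": "2", "name": "Beta", "amount": "not-a-number"},  # malformed: dropped in transform
    {"id": "3", "name": "Gamma", "amount": "7.25"},
]

def extract():
    """Extract step: yield raw records (here, from an in-memory list)."""
    yield from RAW

def transform(records):
    """Transform step: enforce the schema, cast types, drop malformed rows."""
    for rec in records:
        try:
            yield (int(rec["id"]), rec["name"], float(rec["amount"]))
        except (KeyError, ValueError):
            continue  # a production pipeline would route these to a quarantine table

def load(rows, conn):
    """Load step: idempotent upsert into a warehouse table (sqlite3 stand-in)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    """Wire the three steps together and return a row count and total."""
    load(transform(extract()), conn)
    return conn.execute("SELECT COUNT(*), ROUND(SUM(amount), 2) FROM sales").fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    print(run_pipeline(conn))  # (2, 17.75)
```

In a Databricks environment the same three-stage structure would typically appear as Spark reads, DataFrame transformations, and Delta Lake MERGE writes, with orchestration handled by a tool like Airflow rather than a `__main__` block.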
Requirements:
- BA/BS degree with 10 years of experience, or a Master's degree with 5 years of experience.
- Active Public Trust clearance, or the ability to obtain one.
- Strong experience with Python, SQL, and AWS services (EC2, S3, RDS, Redshift, Lambda, Glue, etc.).
- Hands-on experience with Databricks, including Notebooks, Spark clusters, Delta Lake, and integration with cloud storage. Familiarity with Databricks SQL, MLflow, and managing jobs/workflows within the Databricks workspace is preferred.
- Proficiency with both SQL and NoSQL databases.
- Demonstrated experience building and deploying ETL/ELT pipelines.
- Working knowledge of big data tools such as Spark, Hadoop, and Kafka.
- Experience tuning performance in Spark-based environments, particularly within Databricks runtimes.
- Ability to work with API endpoints for data retrieval and system integration.
- Strong understanding of data modeling, schema design, and performance tuning.
- Experience with Agile methodologies and DevSecOps culture.
- Excellent verbal and written communication skills.

Preferred Experience
- Prior experience supporting the Department of Homeland Security (DHS), U.S. Coast Guard, or similar federal agencies.
- Hands-on experience using Databricks for large-scale data processing, Delta Lake management, and collaboration across data teams.
- Experience with MLOps or enabling machine learning environments using tools such as MLflow or Databricks Workflows.
- Familiarity with data governance and metadata management best practices in a cloud-native environment.
- Exposure to operationalizing data pipelines in mission-critical or high-security contexts.

Transferable Skills for Management Consulting
- Experience with message queues, event streaming, and high-throughput data processing.
- Familiarity with BI tools such as Power BI, Tableau, or Qlik.
JPI is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
JPI is hiring for several positions and will consider your application across our current and future openings. Expected compensation for this role is between $80,000 and $130,000, plus a generous benefits package with comprehensive healthcare coverage. Please note that final compensation depends on a variety of factors and is reviewed regularly for both internal and external equity considerations.
Location: Washington
Category: Technology
