Data Engineer - Python & PySpark

Location: McLean
Salary Range: Competitive and commensurate with experience
Introduction
We are seeking an experienced Data Engineer to join our dynamic team. This role involves developing and optimizing ETL pipelines with Python and PySpark on AWS EMR. The ideal candidate will have a strong background in data engineering and a passion for working with large-scale datasets.
Required Skills & Qualifications
- 8-10 years of experience in data engineering
- Proficiency in Python and PySpark
- Experience with AWS EMR and distributed computing solutions
- Strong understanding of Spark RDDs, DataFrames, and Datasets

Preferred Skills & Qualifications
- Experience with IICS jobs for data ingestion and transformation
- Familiarity with CI/CD, testing, and automation in a cloud environment
- Ability to optimize performance and cost efficiency for big data solutions

Day-to-Day Responsibilities
- Develop and optimize ETL pipelines using Python, PySpark, and PySpark notebooks
- Design and implement data ingestion, transformation, and processing workflows
- Collaborate with data engineers, data scientists, and business teams to deliver insights
- Monitor job performance, troubleshoot failures, and tune queries

Company Benefits & Culture
- Inclusive and diverse work environment
- Opportunities for professional growth and development
- Supportive team culture that values work-life balance
For immediate consideration, please click APPLY, or contact me directly at first.last@artech.com or by phone.
