Senior AWS Data Engineer
enable-data
Plano, United States
December 07, 2025
Apply Now
Description

At Enable Data Incorporated, we are excited to welcome a talented Senior AWS Data Engineer to join our dynamic team! With our extensive knowledge of application, data, and cloud engineering services, we strive to create groundbreaking solutions that provide real value to our clients. In this role, you will have a fantastic opportunity to design and implement resilient data architectures using AWS technologies, paving the way for data-driven decision-making.

Key Responsibilities

- Design and Implementation: Design, develop, and implement efficient and reliable data pipelines and ETL processes using PySpark for large-scale data processing in a distributed environment.
- Feature Engineering: Extract, cleanse, and transform raw data into a format optimal for ML models, creating new features that enhance model accuracy and performance.
- AWS Integration: Leverage a variety of AWS services, such as Amazon S3 (data storage), AWS Glue (data cataloging), Amazon EMR (running Spark clusters), and Amazon SageMaker (ML integration and feature stores), to build and deploy solutions.
- Optimization: Optimize existing PySpark applications and data pipelines for performance, cost-efficiency, and scalability.
- Collaboration: Work in an Agile team environment, collaborating with data scientists, data architects, and software engineers to understand data requirements and deliver integrated data solutions.
- Code Quality & MLOps: Write clean, maintainable, and well-documented production-level code, participating in code reviews and implementing CI/CD practices where appropriate.

Requirements

- Experience: Proven, hands-on experience in data engineering or a related software engineering role, with a focus on big data technologies.
- Programming: Strong proficiency in Python and expert knowledge of PySpark, including Spark SQL and data manipulation techniques.
- Cloud Platforms: Solid understanding and practical experience with AWS cloud services related to data engineering (e.g., S3, EMR, SageMaker).
- Databases & SQL: Proficiency in SQL and experience working with various database systems (relational and/or NoSQL).
- Big Data Concepts: Deep understanding of distributed computing concepts, data modeling, data lake design principles, and big data frameworks.
- Tools: Familiarity with orchestration tools such as Apache Airflow and version control systems (Git).
- Strong analytical and problem-solving abilities, with a focus on detail and accuracy.
- Excellent communication and teamwork skills to collaborate effectively with cross-functional teams.