Engineer – PySpark / AWS
This role focuses on building and maintaining scalable data pipelines, data lakes, and data warehouses that support advanced analytics and machine learning initiatives. It calls for hands-on experience with PySpark, AWS, SQL, and the broader big data ecosystem to process large-scale datasets securely and efficiently. You will work closely with data scientists and cross-functional teams, contributing to continuous improvement and risk-controlled delivery.