Cloud Data Engineer

Job Description

Job Location: Lewisville, TX
Job Duration: Full-time
Hours: 9:00 AM to 5:00 PM, Monday through Friday

The Cloud Data Engineer is responsible for designing, building, and optimizing scalable data solutions in cloud environments. This role supports strategic data initiatives across the enterprise by engineering secure and efficient pipelines that enable advanced analytics, real-time insights, and data-driven decision-making.

Minimum Work Experience: 1 year

Minimum Degree Requirement:

Bachelor’s degree, or equivalent, in Computer Science, Information Technology, Engineering, or a closely related field

Job Responsibilities and Duties:

Data Pipeline Design & Development

  • Build scalable, cloud-native data pipelines using tools such as Azure Data Factory, AWS Glue, or Google Cloud Dataflow
  • Implement robust data ingestion and transformation processes aligned with business needs
  • Manage structured and unstructured data workflows from diverse sources

Cloud Architecture & Infrastructure

  • Design and maintain data architectures optimized for cloud platforms (Azure, AWS, or GCP)
  • Leverage distributed systems technologies (e.g., Spark, Databricks, BigQuery) for performance and scalability
  • Utilize Infrastructure as Code (IaC) tools like Terraform for resource provisioning and automation

Data Governance & Compliance

  • Ensure compliance with data privacy standards (GDPR, HIPAA, etc.) and internal governance policies
  • Implement role-based access controls and encryption mechanisms to safeguard sensitive data
  • Collaborate with Information Security teams to maintain audit-ready environments

Collaboration & Stakeholder Support

  • Partner with data scientists, analysts, and business stakeholders to understand use case requirements
  • Translate analytical needs into technical specifications for data processing workflows
  • Support real-time and batch data analytics solutions to empower data-driven decision-making

Performance Optimization & Maintenance

  • Monitor data pipeline performance, ensuring high availability and minimal latency
  • Conduct root cause analysis and troubleshoot pipeline failures
  • Apply best practices for code efficiency, cost optimization, and scalability

Documentation & Standardization

  • Create and maintain technical documentation for data flows, architecture diagrams, and workflow specifications
  • Contribute to internal standards for coding, version control, and pipeline design
  • Support the development of reusable components and frameworks for broader team adoption

Requirements

  • 1 year of experience in data engineering
  • 1 year of experience with Azure Data Factory, AWS Glue, or a similar technology
  • 1 year of experience in Python
  • 1 year of experience in SQL
  • 1 year of experience in one or more cloud platforms (Azure, AWS, GCP)
  • 1 year of experience with CI/CD practices and version control systems (e.g., Git)

Apply Now
