We are looking for an experienced and highly skilled Software Engineer (Data) to design, implement, and optimize large-scale data systems. The ideal candidate has a proven track record of building efficient data pipelines, managing big data systems, and collaborating with cross-functional teams to deliver data-driven solutions.
Responsibilities
● Design and maintain scalable, reliable data pipelines and workflows.
● Design data models and implement schemas to support business objectives.
● Monitor and optimize database performance and query efficiency.
● Collaborate with senior engineers and cross-functional teams on data model and schema design.
● Perform data cleansing, preparation, and validation to ensure data quality.
Required Skills/Qualifications
● Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
● Technical Skills:
○ Advanced proficiency in Python and SQL.
○ Strong experience with data processing tools and frameworks such as Pandas, Polars, and Apache Spark.
○ Hands-on expertise with cloud platforms (e.g., AWS, GCP, Azure).
○ Proficiency with ETL and workflow orchestration tools such as Apache Airflow.
○ Solid experience in data modelling.
● Experience: 2+ years of experience in data engineering or related fields.
● Soft Skills:
○ Strong analytical skills and the ability to troubleshoot complex issues.
○ Leadership skills to guide junior team members and drive team success.
Preferred Skills/Qualifications
● Education: Master’s degree in Computer Science, Information Technology, or a related field.
● Technical Skills:
○ Experience with containerization tools such as Docker.
○ Knowledge of streaming data technologies (e.g., Kafka).
○ Experience with Snowflake or Databricks.
● Experience: 2+ years.
Key Performance Indicators
● Deliver highly optimized and scalable data pipelines within defined timelines.
● Design efficient data models that support business objectives.