Nineleaps is a boutique technology-consulting firm that helps funded ventures and enterprises accelerate their product-development and data efforts. We are 450+ people strong and based out of Bangalore, India. Over the past 8 years, our community of engineers has delivered over 200 intuitive, pragmatic solutions to our clients’ most complex challenges. We have built deep expertise by working with market leaders, technology giants, and the latest disruptors across industries such as Retail, e-Business, Advertising, Finance, Transportation, Healthcare, and Education.

We are looking for a Senior Data Engineer to join our team and help us design, build, test, and maintain scalable custom-built ETL solutions that solve our clients’ most complex and challenging problems across different industries.

Roles and Responsibilities:

  • Design, build, and maintain efficient and scalable data pipelines that collect,
    transform, and load data into Hive.
  • Write ad-hoc Spark jobs and Presto/Hive queries for various analyses.
  • Develop Python scripts to collect and ingest third-party data into data lakes.
  • Write tests to detect data-quality issues such as duplicates, null counts, data drift, etc.
  • Proactively identify opportunities to optimize the pipelines.
  • Communicate with multiple stakeholders to gather requirements and understand the scope of work.
  • Work with upstream and downstream teams to ensure end-to-end execution of the data pipelines.
  • Maintain dashboards of business-critical metrics.

Desired Skills and Experience:

  • Bachelor’s in Computer Science or equivalent with 5+ years of experience.
  • Programming experience in Python and SQL.
  • Well-versed with Hive, Presto, Spark, PySpark, and Apache Airflow frameworks.
  • Passion for working with customers and helping them with their use cases.
  • A solid understanding of information security standards & methodologies.
  • A good understanding of large-scale distributed systems in practice, including multi-tier architectures, application security, monitoring, and storage systems.
  • Strong algorithms/data structures experience.
  • Experience extending and implementing core functionality and libraries in data processing platforms (Hive/Pig UDFs, Spark / Spark SQL, Apache Samza, Kafka, etc).
  • Experience working with collaboration and SCM tools such as Jira, Git, etc.
  • A commitment to writing understandable, maintainable, and reusable software.
  • Experience working on designing and developing front-end components with a focus on innovative customer interactions.
  • Good to have: prior experience building dashboards using tools like
    Google Data Studio, Tableau, and Grafana.
  • Solid communication skills.

(Or) Send your resume to

