Nineleaps is a boutique technology-consulting firm that helps funded ventures and enterprises accelerate their product development and data efforts. We are 450+ people strong and based out of Bangalore, India. Over the past 7 years, our community of engineers has delivered over 200 intuitive and pragmatic solutions to our clients' most complex challenges. We have built deep expertise by working with market leaders, technology giants, and the latest disruptors across industries such as Retail, e-Business, Advertising, Finance, Transportation, Healthcare, and Education.
We are looking for a Data Engineer to join our team and help us design, build, test, and maintain scalable, custom-built ETL solutions that solve our clients' most complex and challenging problems across different industries.
Roles and Responsibilities:
- Design, build, test, and maintain scalable custom-built ETL solutions to meet business needs.
- Contribute to the entire implementation process, including defining improvements driven by business needs and architectural considerations.
- Conduct root cause analysis and advanced performance tuning for complex business processes and functionality.
- Propose, pitch, implement, and demonstrate the success of continuous-improvement initiatives.
- Review frameworks and design principles and adapt them to the project context.
- Review code for quality and implement best practices.
- Promote coding, testing, and deployment of best practices through hands-on research and demonstration.
- Participate in Agile ceremonies to groom stories and develop defect-free code for them.
- Break down work and estimate effort.
- Write testable code that enables extremely high levels of code coverage.
- Mentor junior engineers and guide them toward becoming great engineers.
- Manage stakeholders.
- Report on project health as and when required.
Desired Skills and Experience:
- Strong experience with Hive, SQL, Python, Spark, and Docker.
- Experience with Python libraries such as PySpark, pandas, and NumPy.
- Proficiency in big data technologies and their application; experience with any of Hive, Pig, Spark, HBase, or Kafka is a plus.
- Good understanding of Hadoop Infrastructure.
- Understanding of the threading limitations of Python (the Global Interpreter Lock) and multi-process architecture.
- Experience with REST APIs and data-processing frameworks in Python.
- Familiarity with ORM (Object-Relational Mapper) libraries.
- Good understanding of Test-Driven Development, including unit and integration testing.
- Proficient understanding of code-versioning tools (Git).
- Strong knowledge of design patterns.
- Advanced knowledge of agile methodology.
- Attention to detail and multitasking.
- Experience with Spark using Python or Java.
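As context for the Python threading requirement above: in CPython, the Global Interpreter Lock means threads do not speed up CPU-bound work, which is why multi-process architectures are common in data engineering. A minimal illustrative sketch (the function names are hypothetical, not part of any project codebase):

```python
from multiprocessing import Pool


def cpu_bound(n):
    # CPU-bound work: sum of squares up to n. Under CPython's GIL,
    # only one thread executes bytecode at a time, so threading
    # gives no speedup here.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    # Each worker process has its own interpreter and its own GIL,
    # so CPU-bound tasks can run in true parallel.
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [100_000] * 4)
    print(len(results))
```

For I/O-bound work (API calls, file reads), threads or asyncio remain a good fit despite the GIL.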
Alternatively, send your resume to email@example.com