Hadoop Development Engineer
Beijing Esgyn Information Technology Co., Ltd.
- Company size: 150-500 employees
- Company type: Startup
- Industry: Computer software
Position Information
- Date posted: 2017-10-22
- Location: Beijing, Chaoyang District
- Work experience: No experience required
- Education: Bachelor's degree
- Monthly salary: RMB 10,000-50,000/month
- Job category: Senior Software Engineer
Job Description
Key Responsibilities:
1) Master, enhance, and influence Hadoop platforms to meet our goals in performance, scalability, and support for highly concurrent mixed workloads, on cloud infrastructure and compatible with container-based architectures
2) Optimize and tune Hadoop stacks to achieve breakthrough performance and latency
3) Collaborate with other engineers and open source communities throughout the product lifecycle, including architecture, product support issues, beta testing, bug fixes, etc.
4) Ensure your work promotes product quality, reliability, and supportability
5) Go beyond your normal duties: contribute to our blog, speak at meet-ups and conferences, and contribute to open source communities that promote our technology vision
Qualifications
- B.S., M.S., or Ph.D. in Computer Science, Computer Engineering, Distributed Systems, or a related field.
- 10-15 years of experience with large-scale database system hardware design and engineering, including implementation of established market technologies such as Teradata, Greenplum, Netezza, DB2, or Oracle.
- At least 5 years of experience developing one or more open source technologies such as Apache HDFS, Apache HBase, Apache Hive, RocksDB, Memcached, or Redis. Current committers or PMC members are a plus.
- Direct product development experience with at least one commercial Hadoop distribution (Cloudera / Hortonworks / MapR).
- Experience in large-scale distributed/clustered storage design with an emphasis on performance optimization, fault tolerance, and troubleshooting. Familiarity with software-defined storage and in-memory storage and computing technologies.
- Deep technical knowledge of Linux file systems (VFS, ext4, XFS) and file-system concepts (buffer cache, journaling), and a good grasp of Linux OS, I/O, networking, and multithreading concepts.
- Working knowledge of data management and data reduction technologies such as snapshots, thin provisioning, de-duplication, compression, and erasure coding.
- Strong software engineering skills with efficient, maintainable, and testable C/C++/Java.
- Experience with modern tools for developing software in an agile and efficient manner (e.g., git, Jenkins, Jira, Maven).
Why Esgyn?
At Esgyn, we've charged ourselves with one mission: to empower enterprise IT and data analysts to realize the potential of Big Data with true enterprise readiness in simplicity, scale, security, speed, and TCO. Ultimately, we strive to become the thought leader and technology provider in HTAP and Translytics database transformations and to disrupt the transforming database market.
Esgyn is a fast-growing startup whose engineers have devoted three decades to innovation in database technologies, nurtured an engineer-friendly culture, and produced multiple generations of commercial database products on the market.
Join our team and experience a disruptive Big Data solution at a scale and pace not seen before!
Job category: Senior Software Engineer
Keywords: hadoop
Company Introduction
Since its founding, Esgyn has remained committed to independent innovation in information technology, leading the database technology revolution and building core big data platform products and technologies with independent Chinese intellectual property. The company is dedicated to solving the complex data processing, analysis, and application problems that enterprises, government, and the Internet industry face in the big data era, and to providing key technical support for building the national data infrastructure.
Esgyn's global headquarters is located in the Guiyang national big data pilot zone, with branch offices in Beijing, Shanghai, and Silicon Valley in the United States.
Esgyn has more than 20 years of accumulated expertise in core database technology R&D and currently has nearly one hundred top database engineers worldwide. Most of its technical staff have more than 10 years of database R&D experience, and its most senior employees have worked in database development for over 20 years. The company holds more than one hundred internationally leading core technologies.
Contact Information
- Company address: 汇宾大厦 (Huibin Building)