Job Information
Lenovo Senior Researcher, AI High-Performance Communication R&D (AI高性能通信研发资深研究员) in Beijing, China
General Information
Req #
WD00066363
Career area:
Research/Development
Country/Region:
China
State:
Beijing
City:
Beijing
Date:
Wednesday, June 5, 2024
Working time:
Full-time
Additional Locations:
- China - Beijing
Why Work at Lenovo
We are Lenovo. We do what we say. We own what we do. We WOW our customers.
Lenovo is a US$57 billion revenue global technology powerhouse, ranked #248 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY).
To find out more, visit www.lenovo.com, and read about the latest news via our StoryHub (https://news.lenovo.com/).
Description and Requirements
Responsibilities:
1. Lead the architecture and development of AI high-performance communication technologies, including but not limited to network protocol optimization, in-house collective communication libraries, hardware/software co-design, and converged compute-network architectures;
2. Develop AI high-performance communication solutions that deliver superior performance and cost efficiency for the business;
3. Track AI industry trends and contribute to the design and development of next-generation AI infrastructure.
Requirements:
1. Strong proficiency in C/C++, with 5+ years of systems software development experience;
2. Expert-level CUDA programming skills, with 3+ years of GPU/NPU heterogeneous programming experience;
3. Deep knowledge of collective communication algorithms and implementations such as NCCL and MPI;
4. Familiarity with high-performance open-source frameworks such as libfabric, UCX, and libibverbs;
5. Familiarity with high-performance interconnect technologies such as RDMA, NVLink, IB/RoCEv2, and GPUDirect, and their open-source implementations.
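For context on requirement 3, libraries such as NCCL and MPI implement collectives like all-reduce using ring-based algorithms. The following is a purely illustrative sketch (not part of the posting): a single-process simulation of the classic ring all-reduce, where `ring_allreduce` and the rank layout are hypothetical names chosen for this example, not an actual NCCL or MPI API.

```python
def ring_allreduce(rank_data):
    """Simulate ring all-reduce over P 'ranks' in one process.

    rank_data: list of P equal-length numeric lists (one vector per rank).
    Returns the per-rank vectors after all-reduce (all equal to the sum).
    """
    p = len(rank_data)
    n = len(rank_data[0])
    assert n % p == 0, "vector length must be divisible by the rank count"
    csz = n // p
    # Split each rank's vector into P chunks.
    chunks = [[list(rank_data[r][i * csz:(i + 1) * csz]) for i in range(p)]
              for r in range(p)]

    # Phase 1: reduce-scatter. In each of P-1 steps, rank r sends chunk
    # (r - step) % p to its ring neighbor (r + 1) % p, which accumulates it.
    for step in range(p - 1):
        outgoing = {r: list(chunks[r][(r - step) % p]) for r in range(p)}
        for r in range(p):
            dst, idx = (r + 1) % p, (r - step) % p
            for j in range(csz):
                chunks[dst][idx][j] += outgoing[r][j]
    # Now rank r holds the fully reduced chunk (r + 1) % p.

    # Phase 2: all-gather. Another P-1 steps forward the reduced chunks
    # around the ring; receivers overwrite instead of accumulate.
    for step in range(p - 1):
        outgoing = {r: list(chunks[r][(r + 1 - step) % p]) for r in range(p)}
        for r in range(p):
            dst, idx = (r + 1) % p, (r + 1 - step) % p
            chunks[dst][idx] = outgoing[r]

    return [[x for c in chunks[r] for x in c] for r in range(p)]
```

The design point the role optimizes for: each rank sends 2(P-1)/P of its data in total regardless of P, which is why ring all-reduce is bandwidth-optimal and widely used in GPU collective libraries.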
Preferred qualifications:
1. Familiarity with the basic mechanisms and optimization methods of in-network computing (e.g., NVIDIA SHARP);
2. Familiarity with deep learning frameworks such as PyTorch and TensorFlow;
3. Experience developing and using performance simulators for large-scale AI or HPC clusters;
4. High-quality publications in international conferences or journals related to high-performance interconnects.