Intel 100HFA016LS Internal QSFP28 interface card/adapter

EAN: 735858313445
MPN: 100HFA016LS

Technical specifications

Performance

Bandwidth: 25 Gbit/s
Data transfer rate (max): 100 Gbit/s

Technical details

ARK ID: 92007

Power

Power consumption (typical operation): 10.6 W

Ports & interfaces

PCIe version: 3.0
Internal: Yes
Host interface: PCIe
PCIe card form factor: Low-profile
Fibre optic connector: QSFP28
Output interface: QSFP28

Weight & dimensions

Weight: 240 g
Width: 18.42 mm
Height: 79.2 mm

Other

Product colour: Green, Grey

Intel® Omni-Path Host Fabric Interface Adapter 100 Series, 1 port, PCIe x16, low profile

Intel® Omni-Path Host Fabric Interface (HFI)
Designed specifically for HPC, the Intel® Omni-Path Host Fabric Interface (Intel® OP HFI) uses an advanced connectionless design that delivers performance that scales with high node and core counts, making it the ideal choice for the most demanding application environments. Intel OP HFI supports 100 Gbps per port, which means each Intel OP HFI port can deliver up to 25 GB/s of bidirectional bandwidth. The same ASIC utilized in the Intel OP HFI will also be integrated into future Intel® Xeon® processors and used in third-party products.
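
As a quick sanity check on those figures, the short sketch below (plain Python written for this page, not Intel tooling) converts the 100 Gbit/s line rate into the bidirectional byte throughput quoted above:

```python
# Back-of-the-envelope conversion of the quoted line rate into byte throughput.
# The 100 Gbit/s figure comes from the spec table above; everything else is
# simple arithmetic, not a measured or published result.

line_rate_gbps = 100                       # Gbit/s per QSFP28 port
one_direction_gbytes = line_rate_gbps / 8  # 12.5 GB/s in each direction
bidirectional_gbytes = 2 * one_direction_gbytes

print(f"Per direction: {one_direction_gbytes} GB/s")
print(f"Bidirectional: {bidirectional_gbytes} GB/s")  # -> 25 GB/s, matching the text
```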

Each HFI supports:
- Multi-core scaling – support for up to 160 contexts
- 16 Send DMA engines (M2IO usage)
- Efficiency – large MTU support (4 KB, 8 KB, and 10 KB) for reduced per-packet processing overhead (see the sketch after this list), plus improved packet-level interfaces for better utilization of on-chip resources
- Receive DMA engine arrival notification
- Mapping of a ~128 GB window at 64-byte granularity per HFI
- Up to 8 virtual lanes for differentiated QoS
- An ASIC designed to scale up to 160M messages/second and 300M bidirectional messages/second
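
The per-packet benefit of the larger MTUs is easy to illustrate with simple arithmetic. The sketch below (Python, illustrative only; the packet rates are computed here, not measured or published figures) compares how many packets per second the HFI would need to process to sustain the 100 Gbit/s line rate at each supported MTU:

```python
# Illustrative only: per-packet processing load at full line rate for the MTU
# sizes listed above. Larger MTUs mean fewer packets for the same throughput.

LINE_RATE_BITS = 100e9  # 100 Gbit/s

for mtu_bytes in (4 * 1024, 8 * 1024, 10 * 1024):
    packets_per_sec = LINE_RATE_BITS / (mtu_bytes * 8)
    print(f"MTU {mtu_bytes // 1024:>2} KB -> ~{packets_per_sec / 1e6:.1f} M packets/s at line rate")
```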

Intel® Omni-Path Host Fabric Interface (HFI) Optimizations
Much of the improved HPC application performance and low end-to-end latency at scale comes from the following enhancements:

Enhanced Performance Scaled Messaging (PSM).
The application view of the fabric leverages an enhanced, next-generation version of the Performance Scaled Messaging (PSM) library, so it inherits, and remains application-level software compatible with, the demonstrated scalability of the Intel® True Scale Fabric architecture. Major deployments by the US Department of Energy and others have proven this scalability advantage. PSM is specifically designed for the Message Passing Interface (MPI) and is very lightweight, with roughly one-tenth the user-space code of a verbs implementation. This leads to extremely high MPI and Partitioned Global Address Space (PGAS) message rates (short-message efficiency) compared to using InfiniBand* verbs.
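
PSM is reached indirectly through the MPI library rather than called by applications directly, so the short-message behaviour described above is usually observed with an ordinary MPI latency microbenchmark. The sketch below uses mpi4py purely as an illustration; the rank layout, message size, and iteration count are assumptions for this example, and nothing in it is specific to Intel's PSM API:

```python
# Minimal MPI ping-pong sketch (mpi4py chosen for illustration). On an
# Omni-Path cluster the MPI library would typically sit on top of the PSM
# layer described above; the code itself is generic MPI.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N_ITERS = 10000
msg = bytearray(8)  # small message: stresses message rate, not bandwidth

comm.Barrier()
start = time.perf_counter()
for _ in range(N_ITERS):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(msg, source=1, tag=0)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"Average round-trip latency: {elapsed / N_ITERS * 1e6:.2f} us")
```

Run it with two ranks (for example `mpirun -n 2 python pingpong.py`); on an Omni-Path system the PSM path is normally selected by the MPI installation itself rather than by the benchmark code.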

“Connectionless” message routing.
Intel® Omni-Path Architecture is based on a connectionless design: it does not establish connection address information between nodes, cores, or processes, whereas a traditional implementation maintains this information in the adapter's cache. As a result, the connectionless design delivers consistent latency independent of the scale or number of messaging partners, giving it greater potential to sustain performance and low end-to-end latency as an application is scaled across a large node- or core-count cluster.