Intel® oneAPI Toolkits
Intel® oneAPI products deliver the freedom to develop with a unified toolset and to deploy applications and solutions across CPU, GPU, and FPGA architectures. Native code toolkits implement oneAPI industry specifications and primarily focus on Data Parallel C++ (DPC++), C++, C, and Fortran code development. Data science and AI toolkits support machine learning and deep learning developers who primarily use Python* and AI frameworks.
Native Code Toolkits
Intel® oneAPI Base Toolkit - This foundational toolkit enables building, testing, and optimizing data-centric applications across XPUs.
Intel® oneAPI HPC Toolkit, Intel® oneAPI IoT Toolkit, Intel® oneAPI Rendering Toolkit - These domain-specific toolkits support specialized workloads and include the new DPC++ programming language, as well as the familiar C, C++, and Fortran languages.
Intel® oneAPI HPC Toolkit
High-performance computing (HPC) is at the core of artificial intelligence, machine learning, and deep learning applications. The Intel® oneAPI HPC Toolkit delivers what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization.
This toolkit is an add-on to the Intel® oneAPI Base Toolkit, which is required for full functionality and also provides access to the Intel® Distribution for Python*, the Intel® oneAPI DPC++/C++ Compiler, powerful data-centric libraries, and advanced analysis tools.
Features
Build - Simplify implementation of HPC applications on CPUs and accelerators with Intel’s industry-leading compiler technology and libraries.
Analyze - Quickly gauge how your application is performing, how resource use impacts your code, and where it can be optimized to ensure faster cross-architecture performance.
Scale - Deploy applications and solutions across shared-memory and distributed-memory (such as clusters) computing systems using the included standards-driven MPI library and benchmarks, MPI analyzer, cluster tuning tools, and cluster health-checking tools.
What's Included
Intel® C++ Compiler Classic - Use this standards-based C++ compiler with support for OpenMP* to take advantage of more cores and built-in technologies in platforms based on Intel® Xeon® Scalable processors and Intel® Core™ processors.
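To make the OpenMP* support concrete, here is a minimal sketch (not taken from Intel's documentation) of the kind of loop-level parallelism the compiler accelerates; the file name and the `icpc -qopenmp` compile line are assumptions, and any OpenMP-capable C++ compiler accepts the same code.

```cpp
// vec_add_omp.cpp - illustrative only; file and variable names are hypothetical.
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);

    // Split the loop iterations across all available cores.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];
    }

    std::printf("c[0] = %.1f, threads available = %d\n",
                c[0], omp_get_max_threads());
    return 0;
}
```

A typical Linux build with the classic compiler would look something like `icpc -qopenmp vec_add_omp.cpp`.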
Intel® oneAPI DPC++/C++ Compiler - Compile and optimize DPC++ code for CPU, GPU, and FPGA target architectures.
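For orientation, the sketch below shows the shape of a DPC++/SYCL kernel that such a compiler can target to a CPU, GPU, or FPGA; it assumes a recent oneAPI release where the header is `<sycl/sycl.hpp>` (older releases use `<CL/sycl.hpp>`), and the device actually used depends on the runtime's default selector.

```cpp
// Illustrative DPC++/SYCL vector add; not taken from the toolkit samples.
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    constexpr size_t n = 1024;
    sycl::queue q;  // default device selection (CPU, GPU, ...)

    // Unified shared memory reachable from both host and device.
    double *a = sycl::malloc_shared<double>(n, q);
    double *b = sycl::malloc_shared<double>(n, q);
    double *c = sycl::malloc_shared<double>(n, q);
    for (size_t i = 0; i < n; ++i) { a[i] = 1.0; b[i] = 2.0; }

    // One work-item per element; wait for the kernel to finish.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        c[i] = a[i] + b[i];
    }).wait();

    std::printf("c[0] = %.1f on %s\n", c[0],
                q.get_device().get_info<sycl::info::device::name>().c_str());

    sycl::free(a, q);
    sycl::free(b, q);
    sycl::free(c, q);
    return 0;
}
```

A common build command is along the lines of `icpx -fsycl vec_add_sycl.cpp`; FPGA targets additionally go through an ahead-of-time compilation flow.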
Intel® oneAPI DPC++ Library (oneDPL) - Speed up data parallel workloads with these key productivity algorithms and functions.
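As an illustrative sketch, the snippet below runs a familiar standard-library-style algorithm through a oneDPL device execution policy; it assumes the `<oneapi/dpl/...>` headers and the `make_device_policy` helper provided by recent oneDPL releases.

```cpp
// Illustrative oneDPL usage; not taken from the toolkit samples.
#include <oneapi/dpl/execution>
#include <oneapi/dpl/algorithm>
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    constexpr size_t n = 1 << 16;
    sycl::queue q;
    int *data = sycl::malloc_shared<int>(n, q);
    for (size_t i = 0; i < n; ++i)
        data[i] = static_cast<int>(n - i);  // descending values

    // Same sort interface as the C++ standard library, but the work is
    // dispatched to the device behind the queue-bound execution policy.
    auto policy = oneapi::dpl::execution::make_device_policy(q);
    oneapi::dpl::sort(policy, data, data + n);

    std::printf("first = %d, last = %d\n", data[0], data[n - 1]);
    sycl::free(data, q);
    return 0;
}
```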
Intel® Advisor (viewer only) - The viewer is part of the macOS* download.
Intel® Cluster Checker - Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
Intel® Fortran Compiler Classic - Generate optimized, scalable code for Intel® Xeon® Scalable processors and Intel® Core™ processors with this standards-based Fortran compiler. Its OpenMP* support provides continuity with existing CPU-focused workflows.
Intel® Inspector - Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly problems later.
Intel® MPI Library - Deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
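To make the messaging layer concrete, here is a minimal MPI sketch; the `mpiicpc` wrapper and `mpirun -n 4 ./hello_mpi` launch line in the comment are typical Intel® MPI Library conventions, but the code itself is plain, standards-conforming MPI that any implementation accepts.

```cpp
// hello_mpi.cpp - each rank reports its identity.
// Typical build/run: mpiicpc hello_mpi.cpp -o hello_mpi && mpirun -n 4 ./hello_mpi
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);

    std::printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```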
Intel® Trace Analyzer and Collector - Understand MPI application behavior across its full runtime.
Intel® VTune™ Profiler (viewer only) - The viewer is part of the macOS* download.
Key Specifications
Processors:
- Intel® Xeon® processors
- Intel® Xeon® Scalable processors
- Intel® Core™ processors
GPUs:
- Intel® Processor Graphics Gen9
Operating systems:
- Linux*
- Windows*
- macOS*†
† Not all Intel oneAPI HPC Toolkit components are available for macOS. The following components are included in the macOS download: Intel® C++ Compiler Classic and Intel® Fortran Compiler Classic.
Languages:
- Data Parallel C++ (DPC++) (Note: Must have Intel oneAPI Base Toolkit installed)
- C and C++
- Fortran (Note: Requires Microsoft Visual Studio* on Windows)
- Python* (Note: Must have Intel oneAPI Base Toolkit installed)
Development environments:
- Compatible with compilers from Microsoft, GCC, Intel, and others that follow established language standards
- Windows: Microsoft Visual Studio*
- Linux: Eclipse*
Distributed environments: Open Fabrics Interfaces (OFI) framework implementation supporting the following:
- InfiniBand*
- iWARP, RDMA over Converged Ethernet (RoCE)
- Amazon Web Services Elastic Fabric Adapter (AWS EFA)
- Intel® Omni-Path Architecture (Intel® OPA)
- Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA
Intel® oneAPI Toolkits Comparison