What to support if you are supporting

Dependencies matter a lot in determining who can easily install and use your software. However, there are trade-offs between the features each version provides and how widely those versions are available. This post presents an opinionated take on those trade-offs so you can know what can be widely supported.

Major Linux Distros #

| Distro | Standard EoL | Extended |
|---|---|---|
| CentOS-8 compatible | December 2021* | 2029 |
| CentOS-9 stream | May 2027* | 2032 |
| CentOS-10 stream | December 2030* | 2038 |
| OpenSUSE Leap 15.6 | December 2025 | |
| Ubuntu 22.04 | April 2027 | |
| Ubuntu 24.04 | April 2029 | |
  • Ubuntu LTS releases generally get 5 years of standard support
  • Fedora releases generally get ~1 year of support
  • * Third-party vendors such as AlmaLinux support these for much longer: 2029 for CentOS-8, 2032 for CentOS-9, 2038 for CentOS-10.

Tooling Versions #

| Tool | Minimum (standard EoL) | Ubuntu 22.04 | Ubuntu 24.04 | CentOS 9 Stream | CentOS 10 Stream | SUSE Leap | Fedora (current) | CentOS 8 (EOL, extended) |
|---|---|---|---|---|---|---|---|---|
| gcc | 11.3 | 11.4.0 | 13.2.0 | 11.3 | 14.2.1 | 7.5.0^ to 14 | 15.0.1 | 8.5.0 |
| clang | 16.0 | 14.0.0 | 18.1.3 | 16.0 | 20.1.2 | 17.0.6 | 20.1.2 | 15.0.0 |
| cmake | 3.20 | 3.22.1 | 3.28.3 | 3.20 | 3.30.5 | 3.28.3 | 3.31.6 | 3.20 |
| python3 | 3.6.15 | 3.10.0 | 3.12.3 | 3.9, 3.11 | 3.12.10 | 3.6.15 | 3.13.3 | 3.6-3.9 |
| julia | n/a | n/a | n/a | n/a | n/a | 1.0.3 # | 1.11.0-rc3 | n/a |
| cargo | 1.66.1 | 1.66.1 | 1.75.0 | 1.61.1 | 1.85.0 | 1.82.0 | 1.86.0 | 1.66.1 |
| swig | 3.0.12 | 4.0 | 4.2.0 | 3.0.12 | 4.3.0 $ | 4.1.1 | 4.3.0 | 3.0.12 |
| nvcc * | 11.5 | 11.5.0 | 12.0.140 | n/a * | n/a * | n/a * | n/a * | n/a * |
| numpy | 1.17.3 | 1.21.5 | 1.26.4 | 1.20.1 | 1.26.4 | 1.17.3 | 2.2.4 | 1.14.3 |

  • # This version (1.0.3) has known issues and upstream recommends avoiding it
  • * CentOS and Fedora do not package CUDA themselves, but instead rely on Nvidia to provide the package, which provides the newest version
  • ^ OpenSUSE Leap provides many gcc compilers; the default is 7.5.0
  • $ CentOS provides some packages in the CodeReady Builder or EPEL repositories

Language Features #

C++ #

For more information, consult cppreference.com. C++ is divided into language and library features. Generally, language features are implemented before library features. While Clang originally led in conformance, GCC increasingly gets newer features sooner.

C++14 #

You can safely assume that C++14 is supported on all major distributions.
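As a quick, illustrative sketch (not taken from the tables below), this is the kind of C++14 code you can rely on compiling everywhere: generic lambdas, std::make_unique, binary literals, and digit separators.

```cpp
#include <algorithm>
#include <memory>
#include <vector>

int main() {
    std::vector<int> values{3, 1, 2};

    // Generic lambdas (auto parameters) are a C++14 language feature.
    std::sort(values.begin(), values.end(),
              [](const auto& a, const auto& b) { return a < b; });

    // std::make_unique is a C++14 library feature.
    auto owned = std::make_unique<std::vector<int>>(values);

    // Binary literals and digit separators are also C++14.
    constexpr int mask = 0b1010'1010;  // == 170

    return (owned->size() == 3 && mask == 170) ? 0 : 1;
}
```

This should build with `-std=c++14` on any of the compiler versions listed in the tooling table above.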

| Compiler | C++14 (language full) | C++14 (language 90%) | Missing | C++14 (library full) | C++14 (library 90%) | Missing |
|---|---|---|---|---|---|---|
| GCC/libstdc++ | 5 | 5 | N/A | 10 | 5 | partial support for null forward iterators (N3644) |
| Clang/libc++ | 3.4 | 3.4 | N/A | 3.4 | 3.4 | N/A |

C++17 #

C++17 language features are supported on all major distributions.

C++17 library features require very new compilers to implement fully and are not widely available on LTS systems. The most common features to lag behind are parallel algorithms, the so-called special math functions used in statistics, and OS-assisted features like the hardware interference size. In many cases these can be “polyfilled”, as sketched below.
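A minimal sketch of one such polyfill, using only the standard feature-test macros: prefer the parallel overload of std::sort when the library advertises it, and fall back to the serial one otherwise. (Note that libstdc++'s parallel algorithms additionally need TBB at link time.)

```cpp
#include <algorithm>
#include <vector>

#if __has_include(<version>)
#include <version>  // canonical home of the library feature-test macros
#endif

#if defined(__cpp_lib_parallel_algorithm)
#include <execution>
#endif

void sort_values(std::vector<double>& v) {
#if defined(__cpp_lib_parallel_algorithm)
    // Parallel algorithms are available (e.g. recent libstdc++).
    std::sort(std::execution::par, v.begin(), v.end());
#else
    // Older libstdc++ or libc++: fall back to the serial algorithm.
    std::sort(v.begin(), v.end());
#endif
}
```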

| Compiler | C++17 (language full) | C++17 (language 90%) | Missing | C++17 (library full) | C++17 (library 90%) | Missing |
|---|---|---|---|---|---|---|
| GCC/libstdc++ | 7 | 7 | N/A | 12 | 9 | “elementary string conversions” (P0067R5) |
| Clang/libc++ | 4 | 4 | N/A | No | 17 | parallel algorithms, hardware interference size, special math functions |

C++20 #

C++20 language features are not fully implemented even in the newest compilers. The biggest holdouts are modules and features like consteval, but compilers that implement 90% of the features are present in recent LTSes.

C++20 library features are more sparsely implemented, but they are now fully implemented in GCC 14 and 90% implemented as of Clang 18, both of which are starting to be available in “cutting edge distros” such as Fedora and in some LTSes.
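For example, text formatting can be used opportunistically and replaced with snprintf (or the fmt library) where the standard library does not ship std::format yet. A rough sketch, guarded by the feature-test macro; the helper function here is hypothetical:

```cpp
#include <cstdio>
#include <string>

#if __has_include(<version>)
#include <version>
#endif

#if defined(__cpp_lib_format)
#include <format>
#endif

// Hypothetical helper: format a short status string portably.
std::string describe(int gpus, double mem_gib) {
#if defined(__cpp_lib_format)
    return std::format("{} GPUs, {:.1f} GiB", gpus, mem_gib);
#else
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%d GPUs, %.1f GiB", gpus, mem_gib);
    return std::string(buf);
#endif
}
```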

| Compiler | C++20 (language full) | C++20 (language 90%) | Missing | C++20 (library full) | C++20 (library 90%) | Missing |
|---|---|---|---|---|---|---|
| GCC/libstdc++ | 11* | 10 | only partial support for modules (added in 11), using enum, operator<=>(...)=default, consteval | 14 | 11 | calendar, text formatting, atomics |
| Clang/libc++ | No | 17 | modules, coroutines, non-type template parameters | No | 18 | atomics, source location, operator<=> |

* full support for modules is the lone holdout.

C++23 #

Bleeding edge compilers now have 90% support for C++23 language features.

Library features are not widely implemented in compilers; however, some major constexpr features (e.g. constexpr unique_ptr, optional, and variant) as well as new vocabulary types like std::expected now have support.
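As an illustrative sketch (the function and names here are hypothetical), std::expected can be adopted behind a feature-test guard so the code still builds on standard libraries that do not ship it yet:

```cpp
#include <cstdlib>
#include <string>

#if __has_include(<version>)
#include <version>
#endif

#if defined(__cpp_lib_expected)
#include <expected>

// Returns the parsed port number, or an error message describing why parsing failed.
std::expected<int, std::string> parse_port(const std::string& s) {
    char* end = nullptr;
    long value = std::strtol(s.c_str(), &end, 10);
    if (end == s.c_str() || *end != '\0' || value < 1 || value > 65535)
        return std::unexpected("not a valid port: " + s);
    return static_cast<int>(value);
}
#endif
```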

| Compiler | C++23 (language full) | C++23 (language 90%) | Missing | C++23 (library full) | C++23 (library 90%) | Missing |
|---|---|---|---|---|---|---|
| GCC/libstdc++ | No | 15 | lifetime extension for range-based for loops, scope for trailing lambda return types | No | No | lots |
| Clang/libc++ | No | 19 | sized float types, CTAD from inherited constructors, pointers in constexpr | No | No | lots |

C++26 #

It is too early to start looking at C++26 compiler conformance. Language support is roughly 50% implemented as of GCC 15 and Clang 21, but most library features remain unimplemented.

CMake #

Every major non-EoL distribution supports at least CMake 3.16. If you need to do things with CUDA, try to stick to CMake 3.20 or newer, which has much more robust support for GPU programs and is available on all distributions except CentOS 7. CMake 3.25 is needed for the cuFile APIs and is only available on more cutting-edge distros.

  • 3.10 Added flang, ccache for Ninja, GoogleTest gtest_discover_tests()
  • 3.11 Added add_library without sources, FindDoxygen
  • 3.12 Added cmake --build, <PackageName>_ROOT for find_package, many improvements to FindSWIG
  • 3.13 Added cmake -S ... -B ... to set source and build dirs, target_link_libraries can now be called on targets from different directories, more improvements to SWIG
  • 3.14 Added get_filename_component(), install(FILES)/install(DIRECTORIES) now use GNUInstallDirs by default, FetchContent_MakeAvailable(), numpy support in FindPython
  • 3.15 Improved Python lookups and Python3::Module, $<COMPILE_LANGUAGE:> with a single language, cmake --install
  • 3.16 Added support for unity builds
  • 3.17 Added Ninja Multi-Config, CMAKE_FIND_DEBUG_MODE, FindCUDAToolkit
  • 3.18 add_library can now create ALIAS targets (useful for FetchContent), CMAKE_CUDA_ARCHITECTURES, FindLAPACK imported target, improvements to GoogleTest test discovery
  • 3.19 Apple Silicon, CheckCompilerFlag generalizes the C/C++ specific versions
  • 3.20 CUDAARCHS environment variable added, Nvidia HPC SDK, OneAPI compilers, improved FindCUDAToolkit with ccache, better implicit dependencies
  • 3.21 cmake-presets, Fujitsu compiler
  • 3.22 CMAKE_BUILD_TYPE environment variable to set the default build type
  • 3.23 improvements to presets, HEADER_SETS for IDE integration, many improvements to CUDA separate compilation
  • 3.24 improvements to presets, CMAKE_CUDA_ARCHITECTURES=native, fetch content can now try find_package first
  • 3.25 improvements to presets, block scoping, try_compile doesn’t need a binary directory, cuFile
  • 3.26 ImageMagick imported targets, Python “Stable ABI” targets
  • 3.27 CUDA object libraries, FindDoxygen config file support, FindCUDA (old and deprecated) removed
  • 3.28 C++ modules support is stable but requires GCC 14 or LLVM 16 and the Ninja generator, CMAKE_HIP_PLATFORM to compile HIP code for Nvidia GPUs, CrayClang, many commands became JOB_SERVER_AWARE to better support nested builds
  • 3.29 CMAKE_INSTALL_PREFIX env variable, ctest jobserver support
  • 3.30 C++26 support, Backtrace module support, free-threaded Python support
  • 3.31 OpenMP CUDA support, cmake_pkg_config to avoid native pkg-config dependency
  • 4.0 likely breaking, since functionality deprecated before 3.5 has been removed; cmake --project-file allows maintaining a second CMake build system to facilitate staged updates

Swig #

SWIG 3 is widely available, but SWIG 4 is not – try to avoid C++14 constructs in SWIG-wrapped interfaces.

  • 4.3 Improvements for std::filesystem, std::unique_ptr, std::string_view, fold expressions and trailing return types
  • 4.2 Many improvements for std::array, std::map, std::string_view, Python 3.12
  • 4.1 Improved move semantics support, many new language versions supported (e.g. Node 18, PHP 8, Python 3.11)
  • 4.0 Added C++11 STL container support to Python, and better support for C++14 code
  • 3.0 Added C++11 language support

Cuda #

CUDA places requirements on your GCC/Clang version, and also on the supported hardware. This note has much more detail.

When using NVCC:

| cuda version | max gcc | sm versions |
|---|---|---|
| 8.1 | 5.3 | 2-6.x |
| 9.1 | 6 | 4-7.2 |
| 11.5 | 11 | 3.5-8.6 |
| 12.0 | 12.1 | 4-9 |
| 12.1-3 | 12.2 | 4-9 |
| 12.4-5 | 13.2 | 4-9 |
| 12.4-6 | 13.2 | 4-9 |
| 12.8 | 14.0 | 4-12 |

It is also possible to use clang++ to compile CUDA code.

| clang version | cuda releases | sm versions |
|---|---|---|
| 6 | 7-9 | 3-7.0 |
| 10 | 7-10.1 | 3-7.5 |
| 14 | 7-11.0 | 3-8.0 |
| 15 | 7-11.5 | 3-8.6 |
| 16 | 7-11.8 | 3-9.0 |
| 18 | 7-12.2 | 3-9.0a |
| 19 | 7-12.5 | 3-9.0a |
| 20 | 7-12.6 | 3-10.0 |

In newer versions of CUDA, this command outputs your compute capability:

nvidia-smi --query-gpu=compute_cap --format=csv
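For older toolkits, or to check programmatically, a small host-side program can query the same information through the CUDA runtime API. This is a sketch; the file name and build line are only suggestions.

```cpp
// query_sm.cu -- build with e.g. `nvcc query_sm.cu -o query_sm`
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop{};
        cudaGetDeviceProperties(&prop, i);
        // prop.major/prop.minor are the compute capability, e.g. 8 and 0 for sm_80.
        std::printf("device %d: %s, sm_%d%d\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```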

Python #

For the widest compatibility, avoid features newer than 3.6; however, once CentOS 7 is EoL, 3.8 becomes the next lowest common denominator.

  • 3.6 Added f-strings, variable type annotations, async generators, PYTHONMALLOC and more
  • 3.7 Added breakpoint(), @dataclass, time.perf_counter_ns() and more
  • 3.8 Added the := operator, positional-only parameters, f-string {var=} syntax, and more
  • 3.9 Added | for dicts, list instead of List in type hints, is much faster, and more
  • 3.10 Added match pattern matching, parenthesized context managers, the type | operator, and more
  • 3.11 Added exception groups, tomllib, variadic generics, the Self type, string literal types, is much faster, and more
  • 3.12 Added Path.walk, improved f-strings, type aliases, sys.monitoring, collections.abc.Buffer
  • 3.13 Added an improved interpreter and error messages, an experimental JIT for faster hot functions, copy.replace, and experimental support for free-threaded (no-GIL) Python
  • 3.14 Deferred evaluation of annotations, improved Python debugging

Manylinux #

Python’s pip ecosystem uses manylinux (and its associated build containers) to provide broadly compatible binaries for use with Python.

| Version | GCC | Python | Base |
|---|---|---|---|
| manylinux_2_28 | 12 | 3.8.10+, 3.9.5+, 3.10.0+ | AlmaLinux 8 |

PEP 600 defines manylinux_x_y, where x is the glibc major version and y is the glibc minor version (e.g. manylinux_2_28 targets glibc 2.28). There are Docker containers that provide build environments for these packages and should be preferred. One should also run the auditwheel command to ensure that the compiled library does not link against a disallowed library.

Numpy #

  • 1.17 __array_function__ support, random module made more modular
  • 1.18 64 bit BLAS/LAPACK support
  • 1.19 dropped support for python < 3.6
  • 1.20 Numpy added typing support, wider use of SIMD, start of dtype refactor
  • 1.21 more type annotations, more SIMD
  • 1.22 most of main numpy is typed, array API and C-level dlpack support
  • 1.23 python support for dlpack
  • 1.26 support for array API v2022.12, but fft not supported for now; new build flags
  • 2.0 many changes to the public/private API, changes to C functions, many performance improvements
  • 2.1 support for array API v2023.12, preliminary support for GIL-free python
  • 2.2 improved support for GIL free python, improved use of BLAS for matrix vector products

Julia #

Generally, Julia installations will be relatively new since they are often not provided by the package manager. There have been significant reductions in time-to-first-plot in recent versions.

  • 1.5 threading, @ccall
  • 1.6 various quality of life improvements
  • 1.7 property destructuring, @atomic, reproducible RNG, libblastrampoline
  • 1.8 const on fields in mutable structs, SIGUSR1 profiling
  • 1.9 :interactive threads, jl_adopt_thread, Iterators.flatmap, Package Extensions
  • 1.10 CartesianIndex can now broadcast, vastly improved package compilation times
  • 1.11 new Memory type, public keyword, @main for an opt-in entrypoint, greedy Threads.@threads
  • 1.12 @atomic :monotonic, --task-metrics

Hope this helps!

Changelog #

  • 2025-04-17: comprehensive updates
  • 2024-08-28: comprehensive updates, added numpy to tracking
  • 2024-02-22: added python and cuda and updated cmake
  • 2023-09-12: Created
Author
Robert Underwood
Robert is an Assistant Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory focusing on data and I/O for large-scale scientific applications, including AI for Science, using techniques from lossy compression and data management. He currently co-leads the AuroraGPT Data Team with Ian Foster. In addition to AI, Robert’s library LibPressio, which allows users to experiment with and adopt advanced compressors quickly, has over 200 average unique monthly downloads and is used in over 17 institutions worldwide; he is also a contributor to the R&D100-winning SZ family of compressors and other compression libraries. He regularly mentors students and is the early career ambassador for Argonne to the Joint Laboratory for Extreme Scale Computing.