C++23 with GCC 15
GCC 15 was released in 2025 with a wave of support for new C++23 language and library features.
Listed below are some of the highlights:
Std Library Module
In GCC 14, the std and std.compat modules were not included, as they were not part of the C++20 standard (although MSVC provided an implementation, and Clang reserved the module names), requiring early adopters to implement their own wrapper module and iron out each compiler's specific quirks.
With GCC 15 and C++23, the standard library modules now come ready-built out of the box via the -fmodules feature flag, paving the way forward to module-only projects.
For CMake >= 4.0.0, this is simple to set up:
CMakeLists.txt
cmake_minimum_required(VERSION 4.0.0)
set(CMAKE_EXPERIMENTAL_CXX_IMPORT_STD "d0edc3af-4c50-42ea-a356-e2862fe7a444")
project(myproject VERSION 0.1.0 LANGUAGES CXX)
add_library(mylibrary SHARED)
target_compile_features(mylibrary PUBLIC cxx_std_23)
set_property(TARGET mylibrary PROPERTY CXX_MODULE_STD ON)
target_sources(mylibrary PUBLIC
  FILE_SET my_module TYPE CXX_MODULES FILES
    mymodule.cppm
)
add_executable(myprogram main.cpp)
target_link_libraries(myprogram PUBLIC mylibrary)
mymodule.cppm
export module mymodule;
import std;
export namespace mylibrary
{
    void hello_world() {
        std::println("Hello, world!");
    }
}
main.cpp
import mymodule;
int main() {
    mylibrary::hello_world();
    return 0;
}
Standard print library
The format string API, once only available via the fmt library, is now part of the standard library and fully supports std::ranges.
The strength of format strings is their inline formatting semantics: no more setting and clearing stream modes via std::setprecision and std::boolalpha when printing for human eyes.
Why did C++ introduce stream operators into the standard library in the first place? That decision was made roughly 40 years ago, when extensible, type-safe and polymorphic formatting could only be expressed through virtual calls. Language features and compilers have come a long way since then.
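As a rough illustration (the values and names here are made up for the example, not taken from the post), the same output written with stream manipulators and with std::println:

#include <iostream>
#include <iomanip>
#include <print>
#include <vector>

int main() {
    double pi = 3.14159265;
    std::vector<int> fibs{0, 1, 1, 2, 3, 5};

    // iostream: formatting state is set on the stream and sticks around
    std::cout << std::boolalpha << std::setprecision(3)
              << true << " " << pi << '\n';             // true 3.14

    // std::println: formatting lives inline in the format string,
    // and ranges (here a std::vector) are formattable out of the box
    std::println("{} {:.3} {}", true, pi, fibs);          // true 3.14 [0, 1, 1, 2, 3, 5]
}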
For writing custom print formatters, check out this C++ weekly episode.
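As a minimal sketch of the idea (Point is a hypothetical type invented for this example), a custom formatter is simply a std::formatter specialization:

#include <format>
#include <print>

struct Point { double x, y; };

// Minimal std::formatter specialization; delegates the actual output
// to std::format_to with an ordinary format string.
template <>
struct std::formatter<Point> {
    constexpr auto parse(std::format_parse_context& ctx) {
        return ctx.begin();  // no custom format-spec options
    }
    auto format(const Point& p, std::format_context& ctx) const {
        return std::format_to(ctx.out(), "({}, {})", p.x, p.y);
    }
};

int main() {
    std::println("{}", Point{1.0, 2.5});  // prints: (1, 2.5)
}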
Generators
Given the long-standing prevalence of generators in Python and .NET (via generator functions and IEnumerable, respectively), it is a wonder how C++ managed for so long without them.
Prior to ranges in C++20 and std::generator in C++23, one would have to define a generator as a hand-written class implementing the iterator protocol.
#include <cstddef>
#include <iostream>
#include <iterator>

/**
 * Forward-iterable generator for the fibonacci sequence:
 * F(n) = F(n-1) + F(n-2), F_0 = 0, F_1 = 1
 */
class FibonacciGenerator {
    int _max_n;
public:
    class Iterator {
        long long _current_fib;
        long long _next_fib;
        int _n;
    public:
        // Iterator traits (required for STL compatibility)
        using difference_type = std::ptrdiff_t;
        using value_type = long long;
        using pointer = const value_type*;
        using reference = const value_type&;
        using iterator_category = std::input_iterator_tag;

        // Constructor
        Iterator(long long a = 0, long long b = 1, int count = 0)
            : _current_fib(a), _next_fib(b), _n(count) {}

        // Dereference operator
        value_type operator*() const {
            return _current_fib;
        }

        // Pre-increment operator
        Iterator& operator++() {
            long long temp = _current_fib;
            _current_fib = _next_fib;
            _next_fib += temp;
            _n++;
            return *this;
        }

        // Post-increment operator (optional, but good practice)
        Iterator operator++(int) {
            Iterator temp = *this;
            ++(*this);
            return temp;
        }

        // Comparison operators (only the element count matters for the sentinel)
        bool operator==(const Iterator& other) const {
            return _n == other._n;
        }
        bool operator!=(const Iterator& other) const {
            return !(*this == other);
        }
    };

    FibonacciGenerator(int max_n) : _max_n(max_n) {}

    Iterator begin() const { return Iterator(0, 1, 0); }
    Iterator end() const { return Iterator(0, 0, _max_n); } // Sentinel for end
};

// Usage:
// for (auto value : FibonacciGenerator(10))
// {
//     std::cout << value << " ";
// }
// std::cout << std::endl;
// Output: 0 1 1 2 3 5 8 13 21 34
Using C++20 concepts, modules, print, constexpr math and C++23 generators:
export module fibonacci;
import std;

/**
 * Range generator for the fibonacci sequence:
 * F(n) = F(n-1) + F(n-2), F_0 = 0, F_1 = 1
 */
export template<std::unsigned_integral T>
std::generator<T> fibonacci()
{
    // Largest index n of a fibonacci number F(n) that still fits in T:
    // n(F) := floor(log(sqrt(5) * (F + 0.5)) / log(phi))
    constexpr int n_max =
        (
            std::log2(std::numeric_limits<T>::max())
            + std::log2(std::sqrt(5.0))
        )
        / std::log2(std::numbers::phi_v<double>);

    T a = 0, b = 1;
    co_yield a;
    for (auto n : std::views::iota(0, n_max))
    {
        a = std::exchange(b, a + b);
        co_yield a;
    }
}
// Usage:
// std::println("{}", fibonacci<unsigned>() | std::views::take(10));
// Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
What a concise improvement! For a live demonstration of the above, check out the following Godbolt CMake Example.
Extended floating-point types
In some fields, such as AI and GPGPU, there is a need to optimize memory size and transfer throughput for large amounts of floating-point data. 16-bit floats once required custom implementations, but are now officially defined in the standard in two forms:
- std::float16_t (half, FP16, binary16) - The IEEE-standard half-precision float format, where the significand (10 bits) is much larger than the exponent (5 bits).
- std::bfloat16_t (brain floating point) - The Google-proposed half-precision format, with a larger exponent (8 bits) than significand (7 bits).
In rarer scientific cases there is sometimes a need to extend precision even further, to quadruple precision:
- std::float128_t (FP128, binary128) - Previously available as _Float128 (and as long double on some platforms).
For a complete table of types, see https://en.cppreference.com/w/cpp/types/floating-point.html
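As a minimal sketch (assuming a target where GCC provides all three types, e.g. x86-64; availability is target-dependent), the new types come with literal suffixes:

#include <stdfloat>
#include <print>

int main() {
    // Literal suffixes accompany the new types
    std::float16_t  half  = 3.14f16;
    std::bfloat16_t brain = 3.14bf16;
    std::float128_t quad  = 3.14f128;

    // Cast to float/double for printing, rather than relying on
    // formatter support for the extended types
    std::println("{} bytes, value {}", sizeof(half),  static_cast<float>(half));
    std::println("{} bytes, value {}", sizeof(brain), static_cast<float>(brain));
    std::println("{} bytes, value {}", sizeof(quad),  static_cast<double>(quad));
}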
Which binary16 to use?
Due to the power-tower relation between exponent width and representable range, the exponential range of float16 is far less than that of float32, whereas bfloat16 approximately preserves the range of float32.
floating-point maximum
$$FP_{max}(b_{e}, b_{s})=\dfrac{2^{2^{b_{e}-1}}}{2}\times\left(1+\dfrac{2^{b_{s}}-1}{2^{b_{s}}}\right)$$ $$\tag{float32} FP_{max}(8, 23)=3.40×10^{38}$$ $$\tag{float16} FP_{max}(5, 10)=65504$$ $$\tag{bfloat16} FP_{max}(8, 7)=3.39×10^{38}$$
Where $b_{e}$ is the number of exponent bits, and $b_{s}$ is the number of significand bits.
floating-point precision
$$FP_{prec}(b_{s}) = \dfrac{1}{2^{b_{s}}}$$ $$\tag{float32} FP_{prec}(23)=1.19×10^{-7} \therefore \text{6 decimals}$$ $$\tag{float16} FP_{prec}(10)=9.77×10^{-4} \therefore \text{3 decimals}$$ $$\tag{bfloat16} FP_{prec}(7)=7.81×10^{-3} \therefore \text{2 decimals}$$
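These figures can be checked against the implementation with std::numeric_limits; a small sketch (again assuming the target provides std::float16_t and std::bfloat16_t), where max() corresponds to $FP_{max}$ and epsilon() to $FP_{prec}$:

#include <stdfloat>
#include <limits>
#include <print>

int main() {
    std::println("float32:  max={:.3e} eps={:.3e}",
                 std::numeric_limits<float>::max(),
                 std::numeric_limits<float>::epsilon());
    std::println("float16:  max={:.3e} eps={:.3e}",
                 static_cast<double>(std::numeric_limits<std::float16_t>::max()),
                 static_cast<double>(std::numeric_limits<std::float16_t>::epsilon()));
    std::println("bfloat16: max={:.3e} eps={:.3e}",
                 static_cast<double>(std::numeric_limits<std::bfloat16_t>::max()),
                 static_cast<double>(std::numeric_limits<std::bfloat16_t>::epsilon()));
    // Expected (approximately):
    // float32:  max=3.403e+38 eps=1.192e-07
    // float16:  max=6.550e+04 eps=9.766e-04
    // bfloat16: max=3.390e+38 eps=7.812e-03
}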
Summary
- If at least 3 decimals of accuracy is of more importance, use float16.
- If preserving the exponent range of float32 is of importance and 2 decimals of accuracy is acceptable, use bfloat16.
For more information see IEEE-754.
And more
For more information on modern C++23 features and compiler compatibility, check out the standard reference at: https://en.cppreference.com/w/cpp/23.html