diff --git a/doc/unordered/buckets.adoc b/doc/unordered/buckets.adoc index e8b5a62f..bef4de66 100644 --- a/doc/unordered/buckets.adoc +++ b/doc/unordered/buckets.adoc @@ -126,7 +126,7 @@ h|*Method* h|*Description* |`float max_load_factor(float z)` |Changes the container's maximum load factor, using `z` as a hint. + -**Open-addressing containers:** this function does nothing: users are not allowed to change the maximum load factor. +**Open-addressing and concurrent containers:** this function does nothing: users are not allowed to change the maximum load factor. |`void rehash(size_type n)` |Changes the number of buckets so that there at least `n` buckets, and so that the load factor is less than the maximum load factor. diff --git a/doc/unordered/changes.adoc b/doc/unordered/changes.adoc index 87667cc6..b0f03187 100644 --- a/doc/unordered/changes.adoc +++ b/doc/unordered/changes.adoc @@ -6,8 +6,9 @@ :github-pr-url: https://github.com/boostorg/unordered/pull :cpp: C++ -== Release 1.87.0 +== Release 1.87.0 - Major update +* Added concurrent, node-based containers `boost::concurrent_node_map` and `boost::concurrent_node_set`. * Made visitation exclusive-locked within certain `boost::concurrent_flat_set` operations to allow for safe mutable modification of elements ({github-pr-url}/265[PR#265^]). diff --git a/doc/unordered/compliance.adoc b/doc/unordered/compliance.adoc index a299465a..d532b246 100644 --- a/doc/unordered/compliance.adoc +++ b/doc/unordered/compliance.adoc @@ -89,7 +89,8 @@ The main differences with C++ unordered associative containers are: == Concurrent Containers There is currently no specification in the C++ standard for this or any other type of concurrent -data structure. The APIs of `boost::concurrent_flat_set` and `boost::concurrent_flat_map` +data structure. The APIs of `boost::concurrent_flat_set`/`boost::concurrent_node_set` and +`boost::concurrent_flat_map`/`boost::concurrent_node_map` are modelled after `std::unordered_flat_set` and `std::unordered_flat_map`, respectively, with the crucial difference that iterators are not provided due to their inherent problems in concurrent scenarios (high contention, prone to deadlocking): @@ -105,7 +106,7 @@ In a non-concurrent unordered container, iterators serve two main purposes: * Access to an element previously located via lookup. * Container traversal. -In place of iterators, `boost::concurrent_flat_set` and `boost::concurrent_flat_map` use _internal visitation_ +In place of iterators, Boost.Unordered concurrent containers use _internal visitation_ facilities as a thread-safe substitute. Classical operations returning an iterator to an element already existing in the container, like for instance: @@ -141,7 +142,8 @@ respectively, here visitation is granted mutable or const access depending on the constness of the member function used (there are also `*cvisit` overloads for explicit const visitation); In the case of `boost::concurrent_flat_set`, visitation is always const. -One notable operation not provided by `boost::concurrent_flat_map` is `operator[]`/`at`, which can be +One notable operation not provided by `boost::concurrent_flat_map`/`boost::concurrent_node_map` +is `operator[]`/`at`, which can be replaced, if in a more convoluted manner, by xref:#concurrent_flat_map_try_emplace_or_cvisit[`try_emplace_or_visit`]. 
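For illustration, a minimal sketch of this replacement for a counter update (the map `m`, the key and the use of `int` counters are assumptions of this example, not part of the library documentation):

[source,c++]
----
// Hypothetical counter update without operator[]: if the key is absent,
// the element {word, 1} is inserted; otherwise the existing element is
// visited and its mapped value incremented through a mutable reference.
boost::concurrent_node_map<std::string, int> m;
std::string word = "hello";
m.try_emplace_or_visit(word, 1, [](auto& x) { ++x.second; });
----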
diff --git a/doc/unordered/concurrent.adoc b/doc/unordered/concurrent.adoc index 266ca95e..a39b2c3e 100644 --- a/doc/unordered/concurrent.adoc +++ b/doc/unordered/concurrent.adoc @@ -3,7 +3,8 @@ :idprefix: concurrent_ -Boost.Unordered provides `boost::concurrent_flat_set` and `boost::concurrent_flat_map`, +Boost.Unordered provides `boost::concurrent_node_set`, `boost::concurrent_node_map`, +`boost::concurrent_flat_set` and `boost::concurrent_flat_map`, hash tables that allow concurrent write/read access from different threads without having to implement any synchronization mechanism on the user's side. @@ -43,7 +44,7 @@ logical cores in the CPU). == Visitation-based API -The first thing a new user of `boost::concurrent_flat_set` or `boost::concurrent_flat_map` +The first thing a new user of Boost.Unordered concurrent containers will notice is that these classes _do not provide iterators_ (which makes them technically not https://en.cppreference.com/w/cpp/named_req/Container[Containers^] in the C++ standard sense). The reason for this is that iterators are inherently @@ -62,7 +63,7 @@ thread issues an `m.erase(k)` operation between A and B. There are designs that can remedy this by making iterators lock the element they point to, but this approach lends itself to high contention and can easily produce deadlocks in a program. `operator[]` has similar concurrency issues, and is not provided by -`boost::concurrent_flat_map` either. Instead, element access is done through +`boost::concurrent_flat_map`/`boost::concurrent_node_map` either. Instead, element access is done through so-called _visitation functions_: [source,c++] ---- @@ -112,7 +113,7 @@ if (found) { } ---- -Visitation is prominent in the API provided by `boost::concurrent_flat_set` and `boost::concurrent_flat_map`, and +Visitation is prominent in the API provided by concurrent containers, and many classical operations have visitation-enabled variations: [source,c++] ---- @@ -125,13 +126,15 @@ m.insert_or_visit(x, [](auto& y) { ---- Note that in this last example the visitation function could actually _modify_ -the element: as a general rule, operations on a `boost::concurrent_flat_map` `m` +the element: as a general rule, operations on a concurrent map `m` will grant visitation functions const/non-const access to the element depending on whether `m` is const/non-const. Const access can always be explicitly requested by using `cvisit` overloads (for instance, `insert_or_cvisit`) and may result -in higher parallelization. For `boost::concurrent_flat_set`, on the other hand, +in higher parallelization. For concurrent sets, on the other hand, visitation is always const access. Consult the references of +xref:#concurrent_node_set[`boost::concurrent_node_set`], +xref:#concurrent_node_map[`boost::concurrent_node_map`], xref:#concurrent_flat_set[`boost::concurrent_flat_set`] and xref:#concurrent_flat_map[`boost::concurrent_flat_map`] for the complete list of visitation-enabled operations. @@ -245,7 +248,7 @@ may yield worse performance. == Blocking Operations -``boost::concurrent_flat_set``s and ``boost::concurrent_flat_map``s can be copied, assigned, cleared and merged just like any +Concurrent containers can be copied, assigned, cleared and merged just like any other Boost.Unordered container. Unlike most other operations, these are _blocking_, that is, all other threads are prevented from accessing the tables involved while a copy, assignment, clear or merge operation is in progress.
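By way of illustration (the maps `m1` and `m2` and their contents are assumptions of this sketch, not taken from the library documentation), a blocking merge is invoked like any other member function:

[source,c++]
----
boost::concurrent_node_map<std::string, int> m1, m2;
// ... m1 and m2 are populated, possibly from several threads ...

// Blocking operation: while merge runs, other threads cannot access m1 or m2.
m1.merge(m2); // elements of m2 whose keys are not already in m1 are transferred to m1
----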
Blocking is taken care of automatically by the library @@ -258,9 +261,25 @@ reserving space in advance of bulk insertions will generally speed up the process == Interoperability with non-concurrent containers As open-addressing and concurrent containers are based on the same internal data structure, -`boost::unordered_flat_set` and `boost::unordered_flat_map` can -be efficiently move-constructed from `boost::concurrent_flat_set` and `boost::concurrent_flat_map`, -respectively, and vice versa. +they can be efficiently move-constructed from their non-concurrent counterparts, and vice versa. + +[caption=, title='Table {counter:table-counter}. Concurrent/non-concurrent interoperability'] +[cols="1,1", frame=all, grid=all] +|=== +^|`boost::concurrent_node_set` +^|`boost::unordered_node_set` + +^|`boost::concurrent_node_map` +^|`boost::unordered_node_map` + +^|`boost::concurrent_flat_set` +^|`boost::unordered_flat_set` + +^|`boost::concurrent_flat_map` +^|`boost::unordered_flat_map` + +|=== + This interoperability comes in handy in multistage scenarios where parts of the data processing happen in parallel whereas other steps are non-concurrent (or non-modifying). In the following example, we want to construct a histogram from a huge input vector of words: diff --git a/doc/unordered/concurrent_node_map.adoc b/doc/unordered/concurrent_node_map.adoc new file mode 100644 index 00000000..92829c74 --- /dev/null +++ b/doc/unordered/concurrent_node_map.adoc @@ -0,0 +1,1765 @@ +[#concurrent_node_map] +== Class Template concurrent_node_map + +:idprefix: concurrent_node_map_ + +`boost::concurrent_node_map` — A node-based hash table that associates unique keys with another value and +allows for concurrent element insertion, erasure, lookup and access +without external synchronization mechanisms. + +Even though it acts as a container, `boost::concurrent_node_map` +does not model the standard C++ https://en.cppreference.com/w/cpp/named_req/Container[Container^] concept. +In particular, iterators and associated operations (`begin`, `end`, etc.) are not provided. +Element access and modification are done through user-provided _visitation functions_ that are passed +to `concurrent_node_map` operations where they are executed internally in a controlled fashion. +Such visitation-based API allows for low-contention concurrent usage scenarios. + +The internal data structure of `boost::concurrent_node_map` is similar to that of +`boost::unordered_node_map`. Unlike `boost::concurrent_flat_map`, pointer stability and +node handling functionalities are provided, at the expense of potentially lower performance.
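Before the formal synopsis, a brief usage sketch of this visitation-based interface (assuming the header `<boost/unordered/concurrent_node_map.hpp>`; names and values are illustrative only):

[source,c++]
----
#include <boost/unordered/concurrent_node_map.hpp>
#include <string>
#include <thread>

int main()
{
  boost::concurrent_node_map<std::string, int> m;

  // Concurrent insertions need no external locking.
  std::thread t1([&] { m.emplace("apple", 1); });
  std::thread t2([&] { m.emplace("orange", 2); });
  t1.join();
  t2.join();

  // Elements are reached through visitation functions, not iterators.
  m.visit("apple", [](auto& x) { ++x.second; });  // mutable access (m is non-const)

  int n = 0;
  m.cvisit("orange", [&](const auto& x) { n += x.second; });  // const access
}
----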
+ +=== Synopsis + +[listing,subs="+macros,+quotes"] +----- +// #include + +namespace boost { + template, + class Pred = std::equal_to, + class Allocator = std::allocator>> + class concurrent_node_map { + public: + // types + using key_type = Key; + using mapped_type = T; + using value_type = std::pair; + using init_type = std::pair< + typename std::remove_const::type, + typename std::remove_const::type + >; + using hasher = Hash; + using key_equal = Pred; + using allocator_type = Allocator; + using pointer = typename std::allocator_traits::pointer; + using const_pointer = typename std::allocator_traits::const_pointer; + using reference = value_type&; + using const_reference = const value_type&; + using size_type = std::size_t; + using difference_type = std::ptrdiff_t; + + using node_type = _implementation-defined_; + using insert_return_type = _implementation-defined_; + + using stats = xref:stats_stats_type[__stats-type__]; // if statistics are xref:concurrent_node_map_boost_unordered_enable_stats[enabled] + + // constants + static constexpr size_type xref:#concurrent_node_map_constants[bulk_visit_size] = _implementation-defined_; + + // construct/copy/destroy + xref:#concurrent_node_map_default_constructor[concurrent_node_map](); + explicit xref:#concurrent_node_map_bucket_count_constructor[concurrent_node_map](size_type n, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + template + xref:#concurrent_node_map_iterator_range_constructor[concurrent_node_map](InputIterator f, InputIterator l, + size_type n = _implementation-defined_, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + xref:#concurrent_node_map_copy_constructor[concurrent_node_map](const concurrent_node_map& other); + xref:#concurrent_node_map_move_constructor[concurrent_node_map](concurrent_node_map&& other); + template + xref:#concurrent_node_map_iterator_range_constructor_with_allocator[concurrent_node_map](InputIterator f, InputIterator l,const allocator_type& a); + explicit xref:#concurrent_node_map_allocator_constructor[concurrent_node_map](const Allocator& a); + xref:#concurrent_node_map_copy_constructor_with_allocator[concurrent_node_map](const concurrent_node_map& other, const Allocator& a); + xref:#concurrent_node_map_move_constructor_with_allocator[concurrent_node_map](concurrent_node_map&& other, const Allocator& a); + xref:#concurrent_node_map_move_constructor_from_unordered_node_map[concurrent_node_map](unordered_node_map&& other); + xref:#concurrent_node_map_initializer_list_constructor[concurrent_node_map](std::initializer_list il, + size_type n = _implementation-defined_ + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + xref:#concurrent_node_map_bucket_count_constructor_with_allocator[concurrent_node_map](size_type n, const allocator_type& a); + xref:#concurrent_node_map_bucket_count_constructor_with_hasher_and_allocator[concurrent_node_map](size_type n, const hasher& hf, const allocator_type& a); + template + xref:#concurrent_node_map_iterator_range_constructor_with_bucket_count_and_allocator[concurrent_node_map](InputIterator f, InputIterator l, size_type n, + const allocator_type& a); + template + xref:#concurrent_node_map_iterator_range_constructor_with_bucket_count_and_hasher[concurrent_node_map](InputIterator f, InputIterator l, size_type n, const hasher& hf, + const allocator_type& a); + 
xref:#concurrent_node_map_initializer_list_constructor_with_allocator[concurrent_node_map](std::initializer_list il, const allocator_type& a); + xref:#concurrent_node_map_initializer_list_constructor_with_bucket_count_and_allocator[concurrent_node_map](std::initializer_list il, size_type n, + const allocator_type& a); + xref:#concurrent_node_map_initializer_list_constructor_with_bucket_count_and_hasher_and_allocator[concurrent_node_map](std::initializer_list il, size_type n, const hasher& hf, + const allocator_type& a); + xref:#concurrent_node_map_destructor[~concurrent_node_map](); + concurrent_node_map& xref:#concurrent_node_map_copy_assignment[operator++=++](const concurrent_node_map& other); + concurrent_node_map& xref:#concurrent_node_map_move_assignment[operator++=++](concurrent_node_map&& other) ++noexcept( + (boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_move_assignment::value) && + std::is_same::value);++ + concurrent_node_map& xref:#concurrent_node_map_initializer_list_assignment[operator++=++](std::initializer_list); + allocator_type xref:#concurrent_node_map_get_allocator[get_allocator]() const noexcept; + + + // visitation + template size_t xref:#concurrent_node_map_cvisit[visit](const key_type& k, F f); + template size_t xref:#concurrent_node_map_cvisit[visit](const key_type& k, F f) const; + template size_t xref:#concurrent_node_map_cvisit[cvisit](const key_type& k, F f) const; + template size_t xref:#concurrent_node_map_cvisit[visit](const K& k, F f); + template size_t xref:#concurrent_node_map_cvisit[visit](const K& k, F f) const; + template size_t xref:#concurrent_node_map_cvisit[cvisit](const K& k, F f) const; + + template + size_t xref:concurrent_node_map_bulk_visit[visit](FwdIterator first, FwdIterator last, F f); + template + size_t xref:concurrent_node_map_bulk_visit[visit](FwdIterator first, FwdIterator last, F f) const; + template + size_t xref:concurrent_node_map_bulk_visit[cvisit](FwdIterator first, FwdIterator last, F f) const; + + template size_t xref:#concurrent_node_map_cvisit_all[visit_all](F f); + template size_t xref:#concurrent_node_map_cvisit_all[visit_all](F f) const; + template size_t xref:#concurrent_node_map_cvisit_all[cvisit_all](F f) const; + template + void xref:#concurrent_node_map_parallel_cvisit_all[visit_all](ExecutionPolicy&& policy, F f); + template + void xref:#concurrent_node_map_parallel_cvisit_all[visit_all](ExecutionPolicy&& policy, F f) const; + template + void xref:#concurrent_node_map_parallel_cvisit_all[cvisit_all](ExecutionPolicy&& policy, F f) const; + + template bool xref:#concurrent_node_map_cvisit_while[visit_while](F f); + template bool xref:#concurrent_node_map_cvisit_while[visit_while](F f) const; + template bool xref:#concurrent_node_map_cvisit_while[cvisit_while](F f) const; + template + bool xref:#concurrent_node_map_parallel_cvisit_while[visit_while](ExecutionPolicy&& policy, F f); + template + bool xref:#concurrent_node_map_parallel_cvisit_while[visit_while](ExecutionPolicy&& policy, F f) const; + template + bool xref:#concurrent_node_map_parallel_cvisit_while[cvisit_while](ExecutionPolicy&& policy, F f) const; + + // capacity + ++[[nodiscard]]++ bool xref:#concurrent_node_map_empty[empty]() const noexcept; + size_type xref:#concurrent_node_map_size[size]() const noexcept; + size_type xref:#concurrent_node_map_max_size[max_size]() const noexcept; + + // modifiers + template bool xref:#concurrent_node_map_emplace[emplace](Args&&... 
args); + bool xref:#concurrent_node_map_copy_insert[insert](const value_type& obj); + bool xref:#concurrent_node_map_copy_insert[insert](const init_type& obj); + bool xref:#concurrent_node_map_move_insert[insert](value_type&& obj); + bool xref:#concurrent_node_map_move_insert[insert](init_type&& obj); + template size_type xref:#concurrent_node_map_insert_iterator_range[insert](InputIterator first, InputIterator last); + size_type xref:#concurrent_node_map_insert_initializer_list[insert](std::initializer_list il); + insert_return_type xref:#concurrent_node_map_insert_node[insert](node_type&& nh); + + template bool xref:#concurrent_node_map_emplace_or_cvisit[emplace_or_visit](Args&&... args, F&& f); + template bool xref:#concurrent_node_map_emplace_or_cvisit[emplace_or_cvisit](Args&&... args, F&& f); + template bool xref:#concurrent_node_map_copy_insert_or_cvisit[insert_or_visit](const value_type& obj, F f); + template bool xref:#concurrent_node_map_copy_insert_or_cvisit[insert_or_cvisit](const value_type& obj, F f); + template bool xref:#concurrent_node_map_copy_insert_or_cvisit[insert_or_visit](const init_type& obj, F f); + template bool xref:#concurrent_node_map_copy_insert_or_cvisit[insert_or_cvisit](const init_type& obj, F f); + template bool xref:#concurrent_node_map_move_insert_or_cvisit[insert_or_visit](value_type&& obj, F f); + template bool xref:#concurrent_node_map_move_insert_or_cvisit[insert_or_cvisit](value_type&& obj, F f); + template bool xref:#concurrent_node_map_move_insert_or_cvisit[insert_or_visit](init_type&& obj, F f); + template bool xref:#concurrent_node_map_move_insert_or_cvisit[insert_or_cvisit](init_type&& obj, F f); + template + size_type xref:#concurrent_node_map_insert_iterator_range_or_visit[insert_or_visit](InputIterator first, InputIterator last, F f); + template + size_type xref:#concurrent_node_map_insert_iterator_range_or_visit[insert_or_cvisit](InputIterator first, InputIterator last, F f); + template size_type xref:#concurrent_node_map_insert_initializer_list_or_visit[insert_or_visit](std::initializer_list il, F f); + template size_type xref:#concurrent_node_map_insert_initializer_list_or_visit[insert_or_cvisit](std::initializer_list il, F f); + template insert_return_type xref:#concurrent_node_map_insert_node_or_visit[insert_or_visit](node_type&& nh, F f); + template insert_return_type xref:#concurrent_node_map_insert_node_or_visit[insert_or_cvisit](node_type&& nh, F f); + + template bool xref:#concurrent_node_map_try_emplace[try_emplace](const key_type& k, Args&&... args); + template bool xref:#concurrent_node_map_try_emplace[try_emplace](key_type&& k, Args&&... args); + template bool xref:#concurrent_node_map_try_emplace[try_emplace](K&& k, Args&&... args); + + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_visit](const key_type& k, Args&&... args, F&& f); + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_cvisit](const key_type& k, Args&&... args, F&& f); + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_visit](key_type&& k, Args&&... args, F&& f); + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_cvisit](key_type&& k, Args&&... args, F&& f); + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_visit](K&& k, Args&&... args, F&& f); + template + bool xref:#concurrent_node_map_try_emplace_or_cvisit[try_emplace_or_cvisit](K&& k, Args&&... 
args, F&& f); + + template bool xref:#concurrent_node_map_insert_or_assign[insert_or_assign](const key_type& k, M&& obj); + template bool xref:#concurrent_node_map_insert_or_assign[insert_or_assign](key_type&& k, M&& obj); + template bool xref:#concurrent_node_map_insert_or_assign[insert_or_assign](K&& k, M&& obj); + + size_type xref:#concurrent_node_map_erase[erase](const key_type& k); + template size_type xref:#concurrent_node_map_erase[erase](const K& k); + + template size_type xref:#concurrent_node_map_erase_if_by_key[erase_if](const key_type& k, F f); + template size_type xref:#concurrent_node_map_erase_if_by_key[erase_if](const K& k, F f); + template size_type xref:#concurrent_node_map_erase_if[erase_if](F f); + template void xref:#concurrent_node_map_parallel_erase_if[erase_if](ExecutionPolicy&& policy, F f); + + void xref:#concurrent_node_map_swap[swap](concurrent_node_map& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_swap::value); + + node_type xref:#concurrent_node_map_extract[extract](const key_type& k); + template node_type xref:#concurrent_node_map_extract[extract](const K& k); + + template node_type xref:#concurrent_node_map_extract_if[extract_if](const key_type& k, F f); + template node_type xref:#concurrent_node_map_extract[extract_if](const K& k, F f); + + void xref:#concurrent_node_map_clear[clear]() noexcept; + + template + size_type xref:#concurrent_node_map_merge[merge](concurrent_node_map& source); + template + size_type xref:#concurrent_node_map_merge[merge](concurrent_node_map&& source); + + // observers + hasher xref:#concurrent_node_map_hash_function[hash_function]() const; + key_equal xref:#concurrent_node_map_key_eq[key_eq]() const; + + // map operations + size_type xref:#concurrent_node_map_count[count](const key_type& k) const; + template + size_type xref:#concurrent_node_map_count[count](const K& k) const; + bool xref:#concurrent_node_map_contains[contains](const key_type& k) const; + template + bool xref:#concurrent_node_map_contains[contains](const K& k) const; + + // bucket interface + size_type xref:#concurrent_node_map_bucket_count[bucket_count]() const noexcept; + + // hash policy + float xref:#concurrent_node_map_load_factor[load_factor]() const noexcept; + float xref:#concurrent_node_map_max_load_factor[max_load_factor]() const noexcept; + void xref:#concurrent_node_map_set_max_load_factor[max_load_factor](float z); + size_type xref:#concurrent_node_map_max_load[max_load]() const noexcept; + void xref:#concurrent_node_map_rehash[rehash](size_type n); + void xref:#concurrent_node_map_reserve[reserve](size_type n); + + // statistics (if xref:concurrent_node_map_boost_unordered_enable_stats[enabled]) + stats xref:#concurrent_node_map_get_stats[get_stats]() const; + void xref:#concurrent_node_map_reset_stats[reset_stats]() noexcept; + }; + + // Deduction Guides + template>, + class Pred = std::equal_to>, + class Allocator = std::allocator>> + concurrent_node_map(InputIterator, InputIterator, typename xref:#concurrent_node_map_deduction_guides[__see below__]::size_type = xref:#concurrent_node_map_deduction_guides[__see below__], + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_map, xref:#concurrent_node_map_iter_mapped_type[__iter-mapped-type__], Hash, + Pred, Allocator>; + + template, + class Pred = std::equal_to, + class Allocator = std::allocator>> + concurrent_node_map(std::initializer_list>, + typename xref:#concurrent_node_map_deduction_guides[__see 
below__]::size_type = xref:#concurrent_node_map_deduction_guides[__see below__], Hash = Hash(), + Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_map; + + template + concurrent_node_map(InputIterator, InputIterator, typename xref:#concurrent_node_map_deduction_guides[__see below__]::size_type, Allocator) + -> concurrent_node_map, xref:#concurrent_node_map_iter_mapped_type[__iter-mapped-type__], + boost::hash>, + std::equal_to>, Allocator>; + + template + concurrent_node_map(InputIterator, InputIterator, Allocator) + -> concurrent_node_map, xref:#concurrent_node_map_iter_mapped_type[__iter-mapped-type__], + boost::hash>, + std::equal_to>, Allocator>; + + template + concurrent_node_map(InputIterator, InputIterator, typename xref:#concurrent_node_map_deduction_guides[__see below__]::size_type, Hash, + Allocator) + -> concurrent_node_map, xref:#concurrent_node_map_iter_mapped_type[__iter-mapped-type__], Hash, + std::equal_to>, Allocator>; + + template + concurrent_node_map(std::initializer_list>, typename xref:#concurrent_node_map_deduction_guides[__see below__]::size_type, + Allocator) + -> concurrent_node_map, std::equal_to, Allocator>; + + template + concurrent_node_map(std::initializer_list>, Allocator) + -> concurrent_node_map, std::equal_to, Allocator>; + + template + concurrent_node_map(std::initializer_list>, typename xref:#concurrent_node_map_deduction_guides[__see below__]::size_type, + Hash, Allocator) + -> concurrent_node_map, Allocator>; + + // Equality Comparisons + template + bool xref:#concurrent_node_map_operator[operator==](const concurrent_node_map& x, + const concurrent_node_map& y); + + template + bool xref:#concurrent_node_map_operator_2[operator!=](const concurrent_node_map& x, + const concurrent_node_map& y); + + // swap + template + void xref:#concurrent_node_map_swap_2[swap](concurrent_node_map& x, + concurrent_node_map& y) + noexcept(noexcept(x.swap(y))); + + // Erasure + template + typename concurrent_node_map::size_type + xref:#concurrent_node_map_erase_if_2[erase_if](concurrent_node_map& c, Predicate pred); + + // Pmr aliases (C++17 and up) + namespace unordered::pmr { + template, + class Pred = std::equal_to> + using concurrent_node_map = + boost::concurrent_node_map>>; + } +} +----- + +--- + +=== Description + +*Template Parameters* + +[cols="1,1"] +|=== + +|_Key_ +.2+|`std::pair` must be https://en.cppreference.com/w/cpp/named_req/EmplaceConstructible[EmplaceConstructible^] +into the table from any `std::pair` object convertible to it, and it also must be +https://en.cppreference.com/w/cpp/named_req/Erasable[Erasable^] from the table. + +|_T_ + +|_Hash_ +|A unary function object type that acts a hash function for a `Key`. It takes a single argument of type `Key` and returns a value of type `std::size_t`. + +|_Pred_ +|A binary function object that induces an equivalence relation on values of type `Key`. It takes two arguments of type `Key` and returns a value of type `bool`. + +|_Allocator_ +|An allocator whose value type is the same as the table's value type. +Allocators using https://en.cppreference.com/w/cpp/named_req/Allocator#Fancy_pointers[fancy pointers] are supported. + +|=== + +The element nodes of the table are held into an internal _bucket array_. An node is inserted into a bucket determined by +the hash code of its element, but if the bucket is already occupied (a _collision_), an available one in the vicinity of the +original position is used. 
+ +The size of the bucket array can be automatically increased by a call to `insert`/`emplace`, or as a result of calling +`rehash`/`reserve`. The _load factor_ of the table (number of elements divided by number of buckets) is never +greater than `max_load_factor()`, except possibly for small sizes where the implementation may decide to +allow for higher loads. + +If `xref:hash_traits_hash_is_avalanching[hash_is_avalanching]::value` is `true`, the hash function +is used as-is; otherwise, a bit-mixing post-processing stage is added to increase the quality of hashing +at the expense of extra computational cost. + +--- + +=== Concurrency Requirements and Guarantees + +Concurrent invocations of `operator()` on the same const instance of `Hash` or `Pred` are required +to not introduce data races. For `Alloc` being either `Allocator` or any allocator type rebound +from `Allocator`, concurrent invocations of the following operations on the same instance `al` of `Alloc` +are required to not introduce data races: + +* Copy construction from `al` of an allocator rebound from `Alloc` +* `std::allocator_traits::allocate` +* `std::allocator_traits::deallocate` +* `std::allocator_traits::construct` +* `std::allocator_traits::destroy` + +In general, these requirements on `Hash`, `Pred` and `Allocator` are met if these types +are not stateful or if the operations only involve constant access to internal data members. + +With the exception of destruction, concurrent invocations of any operation on the same instance of a +`concurrent_node_map` do not introduce data races — that is, they are thread-safe. + +If an operation *op* is explicitly designated as _blocking on_ `x`, where `x` is an instance of a `boost::concurrent_node_map`, +prior blocking operations on `x` synchronize with *op*. So, blocking operations on the same +`concurrent_node_map` execute sequentially in a multithreaded scenario. + +An operation is said to be _blocking on rehashing of_ ``__x__`` if it blocks on `x` +only when an internal rehashing is issued. + +When executed internally by a `boost::concurrent_node_map`, the following operations by a +user-provided visitation function on the element passed do not introduce data races: + +* Read access to the element. +* Non-mutable modification of the element. +* Mutable modification of the element (if the container operation executing the visitation function is not const +and its name does not contain `cvisit`). + +Any `boost::concurrent_node_map` operation that inserts or modifies an element `e` +synchronizes with the internal invocation of a visitation function on `e`. + +Visitation functions executed by a `boost::concurrent_node_map` `x` are not allowed to invoke any operation +on `x`; invoking operations on a different `boost::concurrent_node_map` instance `y` is allowed only +if concurrent outstanding operations on `y` do not access `x` directly or indirectly. + +--- + +=== Configuration Macros + +==== `BOOST_UNORDERED_DISABLE_REENTRANCY_CHECK` + +In debug builds (more precisely, when +link:../../../assert/doc/html/assert.html#boost_assert_is_void[`BOOST_ASSERT_IS_VOID`^] +is not defined), __container reentrancies__ (illegally invoking an operation on `m` from within +a function visiting elements of `m`) are detected and signalled through `BOOST_ASSERT_MSG`. +When run-time speed is a concern, the feature can be disabled by globally defining +this macro. + +--- + +==== `BOOST_UNORDERED_ENABLE_STATS` + +Globally define this macro to enable xref:#stats[statistics calculation] for the table.
Note +that this option decreases the overall performance of many operations. + +--- + +=== Typedefs + +[source,c++,subs=+quotes] +---- +typedef _implementation-defined_ node_type; +---- + +A class for holding extracted table elements, modelling +https://en.cppreference.com/w/cpp/container/node_handle[NodeHandle]. + +--- + +[source,c++,subs=+quotes] +---- +typedef _implementation-defined_ insert_return_type; +---- + +A specialization of an internal class template: + +[source,c++,subs=+quotes] +---- +template +struct _insert_return_type_ // name is exposition only +{ + bool inserted; + NodeType node; +}; +---- + +with `NodeType` = `node_type`. + +--- + +=== Constants + +```cpp +static constexpr size_type bulk_visit_size; +``` + +Chunk size internally used in xref:concurrent_node_map_bulk_visit[bulk visit] operations. + +--- + +=== Constructors + +==== Default Constructor +```c++ +concurrent_node_map(); +``` + +Constructs an empty table using `hasher()` as the hash function, +`key_equal()` as the key equality predicate and `allocator_type()` as the allocator. + +[horizontal] +Postconditions:;; `size() == 0` +Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Bucket Count Constructor +```c++ +explicit concurrent_node_map(size_type n, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); +``` + +Constructs an empty table with at least `n` buckets, using `hf` as the hash +function, `eql` as the key equality predicate, and `a` as the allocator. + +[horizontal] +Postconditions:;; `size() == 0` +Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Iterator Range Constructor +[source,c++,subs="+quotes"] +---- +template + concurrent_node_map(InputIterator f, InputIterator l, + size_type n = _implementation-defined_, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); +---- + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `eql` as the key equality predicate and `a` as the allocator, and inserts the elements from `[f, l)` into it. + +[horizontal] +Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Copy Constructor +```c++ +concurrent_node_map(concurrent_node_map const& other); +``` + +The copy constructor. Copies the contained elements, hash function, predicate and allocator. + +If `Allocator::select_on_container_copy_construction` exists and has the right signature, the allocator will be constructed from its result. + +[horizontal] +Requires:;; `value_type` is copy constructible +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor +```c++ +concurrent_node_map(concurrent_node_map&& other); +``` + +The move constructor. The internal bucket array of `other` is transferred directly to the new table. +The hash function, predicate and allocator are moved-constructed from `other`. +If statistics are xref:concurrent_node_map_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. 
+ +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Iterator Range Constructor with Allocator +```c++ +template + concurrent_node_map(InputIterator f, InputIterator l, const allocator_type& a); +``` + +Constructs an empty table using `a` as the allocator, with the default hash function and key equality predicate, and inserts the elements from `[f, l)` into it. + +[horizontal] +Requires:;; `hasher`, `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Allocator Constructor +```c++ +explicit concurrent_node_map(Allocator const& a); +``` + +Constructs an empty table, using allocator `a`. + +--- + +==== Copy Constructor with Allocator +```c++ +concurrent_node_map(concurrent_node_map const& other, Allocator const& a); +``` + +Constructs a table, copying ``other``'s contained elements, hash function, and predicate, but using allocator `a`. + +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor with Allocator +```c++ +concurrent_node_map(concurrent_node_map&& other, Allocator const& a); +``` + +If `a == other.get_allocator()`, the elements of `other` are transferred directly to the new table; +otherwise, elements are move-constructed from those of `other`. The hash function and predicate are move-constructed +from `other`, and the allocator is copy-constructed from `a`. +If statistics are xref:concurrent_node_map_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` iff `a == other.get_allocator()`, +and always calls `other.reset_stats()`. + +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor from unordered_node_map + +```c++ +concurrent_node_map(unordered_node_map&& other); +``` + +Move construction from an xref:#unordered_node_map[`unordered_node_map`]. +The internal bucket array of `other` is transferred directly to the new container. +The hash function, predicate and allocator are move-constructed from `other`. +If statistics are xref:concurrent_node_map_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. + +[horizontal] +Complexity:;; O(`bucket_count()`) + +--- + +==== Initializer List Constructor +[source,c++,subs="+quotes"] +---- +concurrent_node_map(std::initializer_list il, + size_type n = _implementation-defined_, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); +---- + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `eql` as the key equality predicate and `a` as the allocator, and inserts the elements from `il` into it. + +[horizontal] +Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Bucket Count Constructor with Allocator +```c++ +concurrent_node_map(size_type n, allocator_type const& a); +``` + +Constructs an empty table with at least `n` buckets, using the default hash function and key equality predicate and `a` as the allocator. + +[horizontal] +Postconditions:;; `size() == 0` +Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+ +--- + +==== Bucket Count Constructor with Hasher and Allocator +```c++ +concurrent_node_map(size_type n, hasher const& hf, allocator_type const& a); +``` + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, the default key equality predicate and `a` as the allocator. + +[horizontal] +Postconditions:;; `size() == 0` +Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Iterator Range Constructor with Bucket Count and Allocator +[source,c++,subs="+quotes"] +---- +template + concurrent_node_map(InputIterator f, InputIterator l, size_type n, const allocator_type& a); +---- + +Constructs an empty table with at least `n` buckets, using `a` as the allocator and default hash function and key equality predicate, and inserts the elements from `[f, l)` into it. + +[horizontal] +Requires:;; `hasher`, `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Iterator Range Constructor with Bucket Count and Hasher +[source,c++,subs="+quotes"] +---- + template + concurrent_node_map(InputIterator f, InputIterator l, size_type n, const hasher& hf, + const allocator_type& a); +---- + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `a` as the allocator, with the default key equality predicate, and inserts the elements from `[f, l)` into it. + +[horizontal] +Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== initializer_list Constructor with Allocator + +```c++ +concurrent_node_map(std::initializer_list il, const allocator_type& a); +``` + +Constructs an empty table using `a` and default hash function and key equality predicate, and inserts the elements from `il` into it. + +[horizontal] +Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== initializer_list Constructor with Bucket Count and Allocator + +```c++ +concurrent_node_map(std::initializer_list il, size_type n, const allocator_type& a); +``` + +Constructs an empty table with at least `n` buckets, using `a` and default hash function and key equality predicate, and inserts the elements from `il` into it. + +[horizontal] +Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== initializer_list Constructor with Bucket Count and Hasher and Allocator + +```c++ +concurrent_node_map(std::initializer_list il, size_type n, const hasher& hf, + const allocator_type& a); +``` + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `a` as the allocator and default key equality predicate,and inserts the elements from `il` into it. + +[horizontal] +Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +=== Destructor + +```c++ +~concurrent_node_map(); +``` + +[horizontal] +Note:;; The destructor is applied to every element, and all memory is deallocated + +--- + +=== Assignment + +==== Copy Assignment + +```c++ +concurrent_node_map& operator=(concurrent_node_map const& other); +``` + +The assignment operator. 
Destroys previously existing elements, copy-assigns the hash function and predicate from `other`, +copy-assigns the allocator from `other` if `Alloc::propagate_on_container_copy_assignment` exists and `Alloc::propagate_on_container_copy_assignment::value` is `true`, +and finally inserts copies of the elements of `other`. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^] +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== Move Assignment +```c++ +concurrent_node_map& operator=(concurrent_node_map&& other) + noexcept((boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_move_assignment::value) && + std::is_same::value); +``` +The move assignment operator. Destroys previously existing elements, swaps the hash function and predicate from `other`, +and move-assigns the allocator from `other` if `Alloc::propagate_on_container_move_assignment` exists and `Alloc::propagate_on_container_move_assignment::value` is `true`. +If at this point the allocator is equal to `other.get_allocator()`, the internal bucket array of `other` is transferred directly to `*this`; +otherwise, inserts move-constructed copies of the elements of `other`. +If statistics are xref:concurrent_node_map_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` iff the final allocator is equal to `other.get_allocator()`, +and always calls `other.reset_stats()`. + +[horizontal] +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== Initializer List Assignment +```c++ +concurrent_node_map& operator=(std::initializer_list il); +``` + +Assign from values in initializer list. All previously existing elements are destroyed. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^] +Concurrency:;; Blocking on `*this`. + +--- + +=== Visitation + +==== [c]visit + +```c++ +template size_t visit(const key_type& k, F f); +template size_t visit(const key_type& k, F f) const; +template size_t cvisit(const key_type& k, F f) const; +template size_t visit(const K& k, F f); +template size_t visit(const K& k, F f) const; +template size_t cvisit(const K& k, F f) const; +``` + +If an element `x` exists with key equivalent to `k`, invokes `f` with a reference to `x`. +Such reference is const iff `*this` is const. + +[horizontal] +Returns:;; The number of elements visited (0 or 1). +Notes:;; The `template` overloads only participate in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== Bulk visit + +```c++ +template + size_t visit(FwdIterator first, FwdIterator last, F f); +template + size_t visit(FwdIterator first, FwdIterator last, F f) const; +template + size_t cvisit(FwdIterator first, FwdIterator last, F f) const; +``` + +For each element `k` in the range [`first`, `last`), +if there is an element `x` in the container with key equivalent to `k`, +invokes `f` with a reference to `x`. +Such reference is const iff `*this` is const. + +Although functionally equivalent to individually invoking +xref:concurrent_node_map_cvisit[`[c\]visit`] for each key, bulk visitation +performs generally faster due to internal streamlining optimizations. 
+It is advisable that `std::distance(first,last)` be at least +xref:#concurrent_node_map_constants[`bulk_visit_size`] to enjoy +a performance gain: beyond this size, performance is not expected +to increase further. + +[horizontal] +Requires:;; `FwdIterator` is a https://en.cppreference.com/w/cpp/named_req/ForwardIterator[LegacyForwardIterator^] +({cpp}11 to {cpp}17), +or satisfies https://en.cppreference.com/w/cpp/iterator/forward_iterator[std::forward_iterator^] ({cpp}20 and later). +For `K` = `std::iterator_traits::value_type`, either `K` is `key_type` or +else `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. +In the latter case, the library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. +This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. +Returns:;; The number of elements visited. + +--- + +==== [c]visit_all + +```c++ +template size_t visit_all(F f); +template size_t visit_all(F f) const; +template size_t cvisit_all(F f) const; +``` + +Successively invokes `f` with references to each of the elements in the table. +Such references are const iff `*this` is const. + +[horizontal] +Returns:;; The number of elements visited. + +--- + +==== Parallel [c]visit_all + +```c++ +template void visit_all(ExecutionPolicy&& policy, F f); +template void visit_all(ExecutionPolicy&& policy, F f) const; +template void cvisit_all(ExecutionPolicy&& policy, F f) const; +``` + +Invokes `f` with references to each of the elements in the table. Such references are const iff `*this` is const. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +These overloads only participate in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. + +--- + +==== [c]visit_while + +```c++ +template bool visit_while(F f); +template bool visit_while(F f) const; +template bool cvisit_while(F f) const; +``` + +Successively invokes `f` with references to each of the elements in the table until `f` returns `false` +or all the elements are visited. +Such references to the elements are const iff `*this` is const. + +[horizontal] +Returns:;; `false` iff `f` ever returns `false`. + +--- + +==== Parallel [c]visit_while + +```c++ +template bool visit_while(ExecutionPolicy&& policy, F f); +template bool visit_while(ExecutionPolicy&& policy, F f) const; +template bool cvisit_while(ExecutionPolicy&& policy, F f) const; +``` + +Invokes `f` with references to each of the elements in the table until `f` returns `false` +or all the elements are visited. +Such references to the elements are const iff `*this` is const. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Returns:;; `false` iff `f` ever returns `false`. +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +These overloads only participate in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. 
+ ++ +Parallelization implies that execution does not necessary finish as soon as `f` returns `false`, and as a result +`f` may be invoked with further elements for which the return value is also `false`. + +--- + +=== Size and Capacity + +==== empty + +```c++ +[[nodiscard]] bool empty() const noexcept; +``` + +[horizontal] +Returns:;; `size() == 0` + +--- + +==== size + +```c++ +size_type size() const noexcept; +``` + +[horizontal] +Returns:;; The number of elements in the table. + +[horizontal] +Notes:;; In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true size of the table right after execution. + +--- + +==== max_size + +```c++ +size_type max_size() const noexcept; +``` + +[horizontal] +Returns:;; `size()` of the largest possible table. + +--- + +=== Modifiers + +==== emplace +```c++ +template bool emplace(Args&&... args); +``` + +Inserts an object, constructed with the arguments `args`, in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is constructible from `args`. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; If `args...` is of the form `k,v`, it delays constructing the whole object until it is certain that an element should be inserted, using only the `k` argument to check. + +--- + +==== Copy Insert +```c++ +bool insert(const value_type& obj); +bool insert(const init_type& obj); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^]. +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; A call of the form `insert(x)`, where `x` is equally convertible to both `const value_type&` and `const init_type&`, is not ambiguous and selects the `init_type` overload. + +--- + +==== Move Insert +```c++ +bool insert(value_type&& obj); +bool insert(init_type&& obj); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/MoveInsertable[MoveInsertable^]. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; A call of the form `insert(x)`, where `x` is equally convertible to both `value_type&&` and `init_type&&`, is not ambiguous and selects the `init_type` overload. + +--- + +==== Insert Iterator Range +```c++ +template size_type insert(InputIterator first, InputIterator last); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + while(first != last) this->xref:#concurrent_node_map_emplace[emplace](*first++); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Initializer List +```c++ +size_type insert(std::initializer_list il); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + this->xref:#concurrent_node_map_insert_iterator_range[insert](il.begin(), il.end()); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Node +```c++ +insert_return_type insert(node_type&& nh); +``` + +If `nh` is not empty, inserts the associated element in the table if and only if there is no element in the table with a key equivalent to `nh.key()`. +`nh` is empty when the function returns. 
+ +[horizontal] +Returns:;; An `insert_return_type` object constructed from `inserted` and `node`: + +* If `nh` is empty, `inserted` is `false` and `node` is empty. +* Otherwise if the insertion took place, `inserted` is true and `node` is empty. +* If the insertion failed, `inserted` is false and `node` has the previous value of `nh`. +Throws:;; If an exception is thrown by an operation other than a call to `hasher` the function has no effect. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; Behavior is undefined if `nh` is not empty and the allocators of `nh` and the container are not equal. + +--- + +==== emplace_or_[c]visit +```c++ +template bool emplace_or_visit(Args&&... args, F&& f); +template bool emplace_or_cvisit(Args&&... args, F&& f); +``` + +Inserts an object, constructed with the arguments `args`, in the table if there is no element in the table with an equivalent key. +Otherwise, invokes `f` with a reference to the equivalent element; such reference is const iff `emplace_or_cvisit` is used. + +[horizontal] +Requires:;; `value_type` is constructible from `args`. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; The interface is exposition only, as C++ does not allow to declare a parameter `f` after a variadic parameter pack. + +--- + +==== Copy insert_or_[c]visit +```c++ +template bool insert_or_visit(const value_type& obj, F f); +template bool insert_or_cvisit(const value_type& obj, F f); +template bool insert_or_visit(const init_type& obj, F f); +template bool insert_or_cvisit(const init_type& obj, F f); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. +Otherwise, invokes `f` with a reference to the equivalent element; such reference is const iff a `*_cvisit` overload is used. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^]. +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; In a call of the form `insert_or_[c]visit(obj, f)`, the overloads accepting a `const value_type&` argument participate in overload resolution +only if `std::remove_cv::type>::type` is `value_type`. + +--- + +==== Move insert_or_[c]visit +```c++ +template bool insert_or_visit(value_type&& obj, F f); +template bool insert_or_cvisit(value_type&& obj, F f); +template bool insert_or_visit(init_type&& obj, F f); +template bool insert_or_cvisit(init_type&& obj, F f); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. +Otherwise, invokes `f` with a reference to the equivalent element; such reference is const iff a `*_cvisit` overload is used. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/MoveInsertable[MoveInsertable^]. +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; In a call of the form `insert_or_[c]visit(obj, f)`, the overloads accepting a `value_type&&` argument participate in overload resolution +only if `std::remove_reference::type` is `value_type`. 
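For illustration only, a hypothetical call that either inserts a new element or visits the already-stored one (the map, key and values are assumptions of this sketch):

[source,c++]
----
boost::concurrent_node_map<std::string, int> m;
std::pair<std::string, int> p{"apple", 1};  // init_type

// Inserts {"apple", 1} if the key is absent; otherwise increments the
// mapped value of the existing element through a mutable reference.
m.insert_or_visit(std::move(p), [](auto& x) { ++x.second; });
----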
+ +--- + +==== Insert Iterator Range or Visit +```c++ +template + size_type insert_or_visit(InputIterator first, InputIterator last, F f); +template + size_type insert_or_cvisit(InputIterator first, InputIterator last, F f); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + while(first != last) this->xref:#concurrent_node_map_emplace_or_cvisit[emplace_or_[c\]visit](*first++, f); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Initializer List or Visit +```c++ +template size_type insert_or_visit(std::initializer_list il, F f); +template size_type insert_or_cvisit(std::initializer_list il, F f); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + this->xref:#concurrent_node_map_insert_iterator_range_or_visit[insert_or[c\]visit](il.begin(), il.end(), f); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Node or Visit +```c++ +template insert_return_type insert_or_visit(node_type&& nh, F f); +template insert_return_type insert_or_cvisit(node_type&& nh, F f); +``` + +If `nh` is empty, does nothing. +Otherwise, inserts the associated element in the table if and only if there is no element in the table with a key equivalent to `nh.key()`. +Otherwise, invokes `f` with a reference to the equivalent element; such reference is const iff `insert_or_cvisit` is used. + +[horizontal] +Returns:;; An `insert_return_type` object constructed from `inserted` and `node`: + +* If `nh` is empty, `inserted` is `false` and `node` is empty. +* Otherwise if the insertion took place, `inserted` is true and `node` is empty. +* If the insertion failed, `inserted` is false and `node` has the previous value of `nh`. +Throws:;; If an exception is thrown by an operation other than a call to `hasher` or call to `f`, the function has no effect. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; Behavior is undefined if `nh` is not empty and the allocators of `nh` and the container are not equal. + +--- + +==== try_emplace +```c++ +template bool try_emplace(const key_type& k, Args&&... args); +template bool try_emplace(key_type&& k, Args&&... args); +template bool try_emplace(K&& k, Args&&... args); +``` + +Inserts an element constructed from `k` and `args` into the table if there is no existing element with key `k` contained within it. + +[horizontal] +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; This function is similiar to xref:#concurrent_node_map_emplace[emplace], with the difference that no `value_type` is constructed +if there is an element with an equivalent key; otherwise, the construction is of the form: + ++ +-- +```c++ +// first two overloads +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(args)...)) + +// third overload +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(args)...)) +``` + +unlike xref:#concurrent_node_map_emplace[emplace], which simply forwards all arguments to ``value_type``'s constructor. + +The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. 
+ +-- + +--- + +==== try_emplace_or_[c]visit +```c++ +template + bool try_emplace_or_visit(const key_type& k, Args&&... args, F&& f); +template + bool try_emplace_or_cvisit(const key_type& k, Args&&... args, F&& f); +template + bool try_emplace_or_visit(key_type&& k, Args&&... args, F&& f); +template + bool try_emplace_or_cvisit(key_type&& k, Args&&... args, F&& f); +template + bool try_emplace_or_visit(K&& k, Args&&... args, F&& f); +template + bool try_emplace_or_cvisit(K&& k, Args&&... args, F&& f); +``` + +Inserts an element constructed from `k` and `args` into the table if there is no existing element with key `k` contained within it. +Otherwise, invokes `f` with a reference to the equivalent element; such reference is const iff a `*_cvisit` overload is used. + +[horizontal] +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; No `value_type` is constructed +if there is an element with an equivalent key; otherwise, the construction is of the form: + ++ +-- +```c++ +// first four overloads +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(args)...)) + +// last two overloads +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(args)...)) +``` + +The interface is exposition only, as C++ does not allow to declare a parameter `f` after a variadic parameter pack. + +The `template` overloads only participate in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +-- + +--- + +==== insert_or_assign +```c++ +template bool insert_or_assign(const key_type& k, M&& obj); +template bool insert_or_assign(key_type&& k, M&& obj); +template bool insert_or_assign(K&& k, M&& obj); +``` + +Inserts a new element into the table or updates an existing one by assigning to the contained value. + +If there is an element with key `k`, then it is updated by assigning `std::forward(obj)`. + +If there is no such element, it is added to the table as: +```c++ +// first two overloads +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(obj))) + +// third overload +value_type(std::piecewise_construct, + std::forward_as_tuple(std::forward(k)), + std::forward_as_tuple(std::forward(obj))) +``` + +[horizontal] +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; The `template` only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== erase +```c++ +size_type erase(const key_type& k); +template size_type erase(const K& k); +``` + +Erases the element with key equivalent to `k` if it exists. + +[horizontal] +Returns:;; The number of elements erased (0 or 1). +Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. 
The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== erase_if by Key +```c++ +template size_type erase_if(const key_type& k, F f); +template size_type erase_if(const K& k, F f); +``` + +Erases the element `x` with key equivalent to `k` if it exists and `f(x)` is `true`. + +[horizontal] +Returns:;; The number of elements erased (0 or 1). +Throws:;; Only throws an exception if it is thrown by `hasher`, `key_equal` or `f`. +Notes:;; The `template` overload only participates in overload resolution if `std::is_execution_policy_v>` is `false`. + ++ +The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== erase_if +```c++ +template size_type erase_if(F f); +``` + +Successively invokes `f` with references to each of the elements in the table, and erases those for which `f` returns `true`. + +[horizontal] +Returns:;; The number of elements erased. +Throws:;; Only throws an exception if it is thrown by `f`. + +--- + +==== Parallel erase_if +```c++ +template void erase_if(ExecutionPolicy&& policy, F f); +``` + +Invokes `f` with references to each of the elements in the table, and erases those for which `f` returns `true`. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +This overload only participates in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. + +--- + +==== swap +```c++ +void swap(concurrent_node_map& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_swap::value); +``` + +Swaps the contents of the table with the parameter. + +If `Allocator::propagate_on_container_swap` is declared and `Allocator::propagate_on_container_swap::value` is `true` then the tables' allocators are swapped. Otherwise, swapping with unequal allocators results in undefined behavior. + +[horizontal] +Throws:;; Nothing unless `key_equal` or `hasher` throw on swapping. +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== extract +```c++ +node_type extract(const key_type& k); +template node_type extract(K&& k); +``` + +Extracts the element with key equivalent to `k`, if it exists. + +[horizontal] +Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. +Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. 
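+
+A minimal sketch of node extraction and transfer between two maps using equal allocators
+(container names below are assumptions of the example):
+
+```c++
+boost::concurrent_node_map<std::string, int> m1, m2;
+m1.emplace("boost", 1);
+
+// extract moves the node out of m1 without copying the element; the node
+// handle can then be inserted into m2 because both use equal allocators.
+auto nh = m1.extract("boost");
+if (!nh.empty()) {
+  m2.insert(std::move(nh));
+}
+```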
+ +--- + +==== extract_if +```c++ +template node_type extract_if(const key_type& k, F f); +template node_type extract_if(K&& k, F f); +``` + +Extracts the element `x` with key equivalent to `k`, if it exists and `f(x)` is `true`. + +[horizontal] +Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. +Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal` or `f`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== clear +```c++ +void clear() noexcept; +``` + +Erases all elements in the table. + +[horizontal] +Postconditions:;; `size() == 0`, `max_load() >= max_load_factor() * bucket_count()` +Concurrency:;; Blocking on `*this`. + +--- + +==== merge +```c++ +template + size_type merge(concurrent_node_map& source); +template + size_type merge(concurrent_node_map&& source); +``` + +Move-inserts all the elements from `source` whose key is not already present in `*this`, and erases them from `source`. + +[horizontal] +Returns:;; The number of elements inserted. +Concurrency:;; Blocking on `*this` and `source`. + +--- + +=== Observers + +==== get_allocator +``` +allocator_type get_allocator() const noexcept; +``` + +[horizontal] +Returns:;; The table's allocator. + +--- + +==== hash_function +``` +hasher hash_function() const; +``` + +[horizontal] +Returns:;; The table's hash function. + +--- + +==== key_eq +``` +key_equal key_eq() const; +``` + +[horizontal] +Returns:;; The table's key equality predicate. + +--- + +=== Map Operations + +==== count +```c++ +size_type count(const key_type& k) const; +template + size_type count(const K& k) const; +``` + +[horizontal] +Returns:;; The number of elements with key equivalent to `k` (0 or 1). +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + ++ +In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true state of the table right after execution. + +--- + +==== contains +```c++ +bool contains(const key_type& k) const; +template + bool contains(const K& k) const; +``` + +[horizontal] +Returns:;; A boolean indicating whether or not there is an element with key equal to `k` in the table. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + ++ +In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true state of the table right after execution. + +--- +=== Bucket Interface + +==== bucket_count +```c++ +size_type bucket_count() const noexcept; +``` + +[horizontal] +Returns:;; The size of the bucket array. 
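+
+A short sketch of the lookup operations above (names assumed for the example); note that under
+concurrent insertions or erasures the returned values are only a snapshot of the table state:
+
+```c++
+boost::concurrent_node_map<std::string, int> m;
+m.emplace("boost", 1);
+
+bool present  = m.contains("boost"); // true
+std::size_t n = m.count("boost");    // 0 or 1
+```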
+ +--- + +=== Hash Policy + +==== load_factor +```c++ +float load_factor() const noexcept; +``` + +[horizontal] +Returns:;; `static_cast(size())/static_cast(bucket_count())`, or `0` if `bucket_count() == 0`. + +--- + +==== max_load_factor + +```c++ +float max_load_factor() const noexcept; +``` + +[horizontal] +Returns:;; Returns the table's maximum load factor. + +--- + +==== Set max_load_factor +```c++ +void max_load_factor(float z); +``` + +[horizontal] +Effects:;; Does nothing, as the user is not allowed to change this parameter. Kept for compatibility with `boost::unordered_map`. + +--- + + +==== max_load + +```c++ +size_type max_load() const noexcept; +``` + +[horizontal] +Returns:;; The maximum number of elements the table can hold without rehashing, assuming that no further elements will be erased. +Note:;; After construction, rehash or clearance, the table's maximum load is at least `max_load_factor() * bucket_count()`. +This number may decrease on erasure under high-load conditions. + ++ +In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true state of the table right after execution. + +--- + +==== rehash +```c++ +void rehash(size_type n); +``` + +Changes if necessary the size of the bucket array so that there are at least `n` buckets, and so that the load factor is less than or equal to the maximum load factor. When applicable, this will either grow or shrink the `bucket_count()` associated with the table. + +When `size() == 0`, `rehash(0)` will deallocate the underlying buckets array. + +[horizontal] +Throws:;; The function has no effect if an exception is thrown, unless it is thrown by the table's hash function or comparison function. +Concurrency:;; Blocking on `*this`. +--- + +==== reserve +```c++ +void reserve(size_type n); +``` + +Equivalent to `a.rehash(ceil(n / a.max_load_factor()))`. + +Similar to `rehash`, this function can be used to grow or shrink the number of buckets in the table. + +[horizontal] +Throws:;; The function has no effect if an exception is thrown, unless it is thrown by the table's hash function or comparison function. +Concurrency:;; Blocking on `*this`. + +--- + +=== Statistics + +==== get_stats +```c++ +stats get_stats() const; +``` + +[horizontal] +Returns:;; A statistical description of the insertion and lookup operations performed by the table so far. +Notes:;; Only available if xref:stats[statistics calculation] is xref:concurrent_node_map_boost_unordered_enable_stats[enabled]. + +--- + +==== reset_stats +```c++ +void reset_stats() noexcept; +``` + +[horizontal] +Effects:;; Sets to zero the internal statistics kept by the table. +Notes:;; Only available if xref:stats[statistics calculation] is xref:concurrent_node_map_boost_unordered_enable_stats[enabled]. + +--- + +=== Deduction Guides +A deduction guide will not participate in overload resolution if any of the following are true: + + - It has an `InputIterator` template parameter and a type that does not qualify as an input iterator is deduced for that parameter. + - It has an `Allocator` template parameter and a type that does not qualify as an allocator is deduced for that parameter. + - It has a `Hash` template parameter and an integral type or a type that qualifies as an allocator is deduced for that parameter. + - It has a `Pred` template parameter and a type that qualifies as an allocator is deduced for that parameter. 
+ +A `size_­type` parameter type in a deduction guide refers to the `size_­type` member type of the +table type deduced by the deduction guide. Its default value coincides with the default value +of the constructor selected. + +==== __iter-value-type__ +[listings,subs="+macros,+quotes"] +----- +template + using __iter-value-type__ = + typename std::iterator_traits::value_type; // exposition only +----- + +==== __iter-key-type__ +[listings,subs="+macros,+quotes"] +----- +template + using __iter-key-type__ = std::remove_const_t< + std::tuple_element_t<0, xref:#concurrent_map_iter_value_type[__iter-value-type__]>>; // exposition only +----- + +==== __iter-mapped-type__ +[listings,subs="+macros,+quotes"] +----- +template + using __iter-mapped-type__ = + std::tuple_element_t<1, xref:#concurrent_map_iter_value_type[__iter-value-type__]>; // exposition only +----- + +==== __iter-to-alloc-type__ +[listings,subs="+macros,+quotes"] +----- +template + using __iter-to-alloc-type__ = std::pair< + std::add_const_t>>, + std::tuple_element_t<1, xref:#concurrent_map_iter_value_type[__iter-value-type__]>>; // exposition only +----- + +=== Equality Comparisons + +==== operator== +```c++ +template + bool operator==(const concurrent_node_map& x, + const concurrent_node_map& y); +``` + +Returns `true` if `x.size() == y.size()` and for every element in `x`, there is an element in `y` with the same key, with an equal value (using `operator==` to compare the value types). + +[horizontal] +Concurrency:;; Blocking on `x` and `y`. +Notes:;; Behavior is undefined if the two tables don't have equivalent equality predicates. + +--- + +==== operator!= +```c++ +template + bool operator!=(const concurrent_node_map& x, + const concurrent_node_map& y); +``` + +Returns `false` if `x.size() == y.size()` and for every element in `x`, there is an element in `y` with the same key, with an equal value (using `operator==` to compare the value types). + +[horizontal] +Concurrency:;; Blocking on `x` and `y`. +Notes:;; Behavior is undefined if the two tables don't have equivalent equality predicates. + +--- + +=== Swap +```c++ +template + void swap(concurrent_node_map& x, + concurrent_node_map& y) + noexcept(noexcept(x.swap(y))); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- +x.xref:#concurrent_node_map_swap[swap](y); +----- + +--- + +=== erase_if +```c++ +template + typename concurrent_node_map::size_type + erase_if(concurrent_node_map& c, Predicate pred); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- +c.xref:#concurrent_node_map_erase_if[erase_if](pred); +----- + +=== Serialization + +``concurrent_node_map``s can be archived/retrieved by means of +link:../../../serialization/index.html[Boost.Serialization^] using the API provided +by this library. Both regular and XML archives are supported. + +==== Saving an concurrent_node_map to an archive + +Saves all the elements of a `concurrent_node_map` `x` to an archive (XML archive) `ar`. + +[horizontal] +Requires:;; `std::remove_const::type` and `std::remove_const::type` +are serializable (XML serializable), and they do support Boost.Serialization +`save_construct_data`/`load_construct_data` protocol (automatically suported by +https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^] +types). +Concurrency:;; Blocking on `x`. 
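+
+A sketch of saving to a plain text archive (the file name, archive type and header paths are
+assumptions of the example; any Boost.Serialization archive can be used):
+
+```c++
+#include <boost/archive/text_oarchive.hpp>
+#include <boost/unordered/concurrent_node_map.hpp>
+#include <fstream>
+#include <string>
+
+int main() {
+  boost::concurrent_node_map<std::string, int> m;
+  m.emplace("boost", 1);
+
+  std::ofstream ofs("map.txt");
+  boost::archive::text_oarchive oa(ofs);
+
+  // Saving blocks all other accesses to m while in progress; a const
+  // reference is used to follow the usual Boost.Serialization conventions.
+  const auto& cm = m;
+  oa << cm;
+}
+```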
+ +--- + +==== Loading an concurrent_node_map from an archive + +Deletes all preexisting elements of a `concurrent_node_map` `x` and inserts +from an archive (XML archive) `ar` restored copies of the elements of the +original `concurrent_node_map` `other` saved to the storage read by `ar`. + +[horizontal] +Requires:;; `x.key_equal()` is functionally equivalent to `other.key_equal()`. +Concurrency:;; Blocking on `x`. diff --git a/doc/unordered/concurrent_node_set.adoc b/doc/unordered/concurrent_node_set.adoc new file mode 100644 index 00000000..71389cc9 --- /dev/null +++ b/doc/unordered/concurrent_node_set.adoc @@ -0,0 +1,1599 @@ +[#concurrent_node_set] +== Class Template concurrent_node_set + +:idprefix: concurrent_node_set_ + +`boost::concurrent_node_set` — A node-based hash table that stores unique values and +allows for concurrent element insertion, erasure, lookup and access +without external synchronization mechanisms. + +Even though it acts as a container, `boost::concurrent_node_set` +does not model the standard C++ https://en.cppreference.com/w/cpp/named_req/Container[Container^] concept. +In particular, iterators and associated operations (`begin`, `end`, etc.) are not provided. +Element access is done through user-provided _visitation functions_ that are passed +to `concurrent_node_set` operations where they are executed internally in a controlled fashion. +Such visitation-based API allows for low-contention concurrent usage scenarios. + +The internal data structure of `boost::concurrent_node_set` is similar to that of +`boost::unordered_node_set`. Unlike `boost::concurrent_flat_set`, pointer stability and +node handling functionalities are provided, at the expense of potentially lower performance. + +=== Synopsis + +[listing,subs="+macros,+quotes"] +----- +// #include + +namespace boost { + template, + class Pred = std::equal_to, + class Allocator = std::allocator> + class concurrent_node_set { + public: + // types + using key_type = Key; + using value_type = Key; + using init_type = Key; + using hasher = Hash; + using key_equal = Pred; + using allocator_type = Allocator; + using pointer = typename std::allocator_traits::pointer; + using const_pointer = typename std::allocator_traits::const_pointer; + using reference = value_type&; + using const_reference = const value_type&; + using size_type = std::size_t; + using difference_type = std::ptrdiff_t; + + using node_type = _implementation-defined_; + using insert_return_type = _implementation-defined_; + + using stats = xref:stats_stats_type[__stats-type__]; // if statistics are xref:concurrent_node_set_boost_unordered_enable_stats[enabled] + + // constants + static constexpr size_type xref:#concurrent_node_set_constants[bulk_visit_size] = _implementation-defined_; + + // construct/copy/destroy + xref:#concurrent_node_set_default_constructor[concurrent_node_set](); + explicit xref:#concurrent_node_set_bucket_count_constructor[concurrent_node_set](size_type n, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + template + xref:#concurrent_node_set_iterator_range_constructor[concurrent_node_set](InputIterator f, InputIterator l, + size_type n = _implementation-defined_, + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + xref:#concurrent_node_set_copy_constructor[concurrent_node_set](const concurrent_node_set& other); + 
xref:#concurrent_node_set_move_constructor[concurrent_node_set](concurrent_node_set&& other); + template + xref:#concurrent_node_set_iterator_range_constructor_with_allocator[concurrent_node_set](InputIterator f, InputIterator l,const allocator_type& a); + explicit xref:#concurrent_node_set_allocator_constructor[concurrent_node_set](const Allocator& a); + xref:#concurrent_node_set_copy_constructor_with_allocator[concurrent_node_set](const concurrent_node_set& other, const Allocator& a); + xref:#concurrent_node_set_move_constructor_with_allocator[concurrent_node_set](concurrent_node_set&& other, const Allocator& a); + xref:#concurrent_node_set_move_constructor_from_unordered_node_set[concurrent_node_set](unordered_node_set&& other); + xref:#concurrent_node_set_initializer_list_constructor[concurrent_node_set](std::initializer_list il, + size_type n = _implementation-defined_ + const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()); + xref:#concurrent_node_set_bucket_count_constructor_with_allocator[concurrent_node_set](size_type n, const allocator_type& a); + xref:#concurrent_node_set_bucket_count_constructor_with_hasher_and_allocator[concurrent_node_set](size_type n, const hasher& hf, const allocator_type& a); + template + xref:#concurrent_node_set_iterator_range_constructor_with_bucket_count_and_allocator[concurrent_node_set](InputIterator f, InputIterator l, size_type n, + const allocator_type& a); + template + xref:#concurrent_node_set_iterator_range_constructor_with_bucket_count_and_hasher[concurrent_node_set](InputIterator f, InputIterator l, size_type n, const hasher& hf, + const allocator_type& a); + xref:#concurrent_node_set_initializer_list_constructor_with_allocator[concurrent_node_set](std::initializer_list il, const allocator_type& a); + xref:#concurrent_node_set_initializer_list_constructor_with_bucket_count_and_allocator[concurrent_node_set](std::initializer_list il, size_type n, + const allocator_type& a); + xref:#concurrent_node_set_initializer_list_constructor_with_bucket_count_and_hasher_and_allocator[concurrent_node_set](std::initializer_list il, size_type n, const hasher& hf, + const allocator_type& a); + xref:#concurrent_node_set_destructor[~concurrent_node_set](); + concurrent_node_set& xref:#concurrent_node_set_copy_assignment[operator++=++](const concurrent_node_set& other); + concurrent_node_set& xref:#concurrent_node_set_move_assignment[operator++=++](concurrent_node_set&& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_move_assignment::value); + concurrent_node_set& xref:#concurrent_node_set_initializer_list_assignment[operator++=++](std::initializer_list); + allocator_type xref:#concurrent_node_set_get_allocator[get_allocator]() const noexcept; + + + // visitation + template size_t xref:#concurrent_node_set_cvisit[visit](const key_type& k, F f); + template size_t xref:#concurrent_node_set_cvisit[visit](const key_type& k, F f) const; + template size_t xref:#concurrent_node_set_cvisit[cvisit](const key_type& k, F f) const; + template size_t xref:#concurrent_node_set_cvisit[visit](const K& k, F f); + template size_t xref:#concurrent_node_set_cvisit[visit](const K& k, F f) const; + template size_t xref:#concurrent_node_set_cvisit[cvisit](const K& k, F f) const; + + template + size_t xref:concurrent_node_set_bulk_visit[visit](FwdIterator first, FwdIterator last, F f); + template + size_t 
xref:concurrent_node_set_bulk_visit[visit](FwdIterator first, FwdIterator last, F f) const; + template + size_t xref:concurrent_node_set_bulk_visit[cvisit](FwdIterator first, FwdIterator last, F f) const; + + template size_t xref:#concurrent_node_set_cvisit_all[visit_all](F f); + template size_t xref:#concurrent_node_set_cvisit_all[visit_all](F f) const; + template size_t xref:#concurrent_node_set_cvisit_all[cvisit_all](F f) const; + template + void xref:#concurrent_node_set_parallel_cvisit_all[visit_all](ExecutionPolicy&& policy, F f); + template + void xref:#concurrent_node_set_parallel_cvisit_all[visit_all](ExecutionPolicy&& policy, F f) const; + template + void xref:#concurrent_node_set_parallel_cvisit_all[cvisit_all](ExecutionPolicy&& policy, F f) const; + + template bool xref:#concurrent_node_set_cvisit_while[visit_while](F f); + template bool xref:#concurrent_node_set_cvisit_while[visit_while](F f) const; + template bool xref:#concurrent_node_set_cvisit_while[cvisit_while](F f) const; + template + bool xref:#concurrent_node_set_parallel_cvisit_while[visit_while](ExecutionPolicy&& policy, F f); + template + bool xref:#concurrent_node_set_parallel_cvisit_while[visit_while](ExecutionPolicy&& policy, F f) const; + template + bool xref:#concurrent_node_set_parallel_cvisit_while[cvisit_while](ExecutionPolicy&& policy, F f) const; + + // capacity + ++[[nodiscard]]++ bool xref:#concurrent_node_set_empty[empty]() const noexcept; + size_type xref:#concurrent_node_set_size[size]() const noexcept; + size_type xref:#concurrent_node_set_max_size[max_size]() const noexcept; + + // modifiers + template bool xref:#concurrent_node_set_emplace[emplace](Args&&... args); + bool xref:#concurrent_node_set_copy_insert[insert](const value_type& obj); + bool xref:#concurrent_node_set_move_insert[insert](value_type&& obj); + template bool xref:#concurrent_node_set_transparent_insert[insert](K&& k); + template size_type xref:#concurrent_node_set_insert_iterator_range[insert](InputIterator first, InputIterator last); + size_type xref:#concurrent_node_set_insert_initializer_list[insert](std::initializer_list il); + insert_return_type xref:#concurrent_node_set_insert_node[insert](node_type&& nh); + + template bool xref:#concurrent_node_set_emplace_or_cvisit[emplace_or_visit](Args&&... args, F&& f); + template bool xref:#concurrent_node_set_emplace_or_cvisit[emplace_or_cvisit](Args&&... 
args, F&& f); + template bool xref:#concurrent_node_set_copy_insert_or_cvisit[insert_or_visit](const value_type& obj, F f); + template bool xref:#concurrent_node_set_copy_insert_or_cvisit[insert_or_cvisit](const value_type& obj, F f); + template bool xref:#concurrent_node_set_move_insert_or_cvisit[insert_or_visit](value_type&& obj, F f); + template bool xref:#concurrent_node_set_move_insert_or_cvisit[insert_or_cvisit](value_type&& obj, F f); + template bool xref:#concurrent_node_set_transparent_insert_or_cvisit[insert_or_visit](K&& k, F f); + template bool xref:#concurrent_node_set_transparent_insert_or_cvisit[insert_or_cvisit](K&& k, F f); + template + size_type xref:#concurrent_node_set_insert_iterator_range_or_visit[insert_or_visit](InputIterator first, InputIterator last, F f); + template + size_type xref:#concurrent_node_set_insert_iterator_range_or_visit[insert_or_cvisit](InputIterator first, InputIterator last, F f); + template size_type xref:#concurrent_node_set_insert_initializer_list_or_visit[insert_or_visit](std::initializer_list il, F f); + template size_type xref:#concurrent_node_set_insert_initializer_list_or_visit[insert_or_cvisit](std::initializer_list il, F f); + template insert_return_type xref:#concurrent_node_set_insert_node_or_visit[insert_or_visit](node_type&& nh, F f); + template insert_return_type xref:#concurrent_node_set_insert_node_or_visit[insert_or_cvisit](node_type&& nh, F f); + + size_type xref:#concurrent_node_set_erase[erase](const key_type& k); + template size_type xref:#concurrent_node_set_erase[erase](const K& k); + + template size_type xref:#concurrent_node_set_erase_if_by_key[erase_if](const key_type& k, F f); + template size_type xref:#concurrent_node_set_erase_if_by_key[erase_if](const K& k, F f); + template size_type xref:#concurrent_node_set_erase_if[erase_if](F f); + template void xref:#concurrent_node_set_parallel_erase_if[erase_if](ExecutionPolicy&& policy, F f); + + void xref:#concurrent_node_set_swap[swap](concurrent_node_set& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_swap::value); + + node_type xref:#concurrent_node_set_extract[extract](const key_type& k); + template node_type xref:#concurrent_node_set_extract[extract](const K& k); + + template node_type xref:#concurrent_node_set_extract_if[extract_if](const key_type& k, F f); + template node_type xref:#concurrent_node_set_extract[extract_if](const K& k, F f); + + void xref:#concurrent_node_set_clear[clear]() noexcept; + + template + size_type xref:#concurrent_node_set_merge[merge](concurrent_node_set& source); + template + size_type xref:#concurrent_node_set_merge[merge](concurrent_node_set&& source); + + // observers + hasher xref:#concurrent_node_set_hash_function[hash_function]() const; + key_equal xref:#concurrent_node_set_key_eq[key_eq]() const; + + // set operations + size_type xref:#concurrent_node_set_count[count](const key_type& k) const; + template + size_type xref:#concurrent_node_set_count[count](const K& k) const; + bool xref:#concurrent_node_set_contains[contains](const key_type& k) const; + template + bool xref:#concurrent_node_set_contains[contains](const K& k) const; + + // bucket interface + size_type xref:#concurrent_node_set_bucket_count[bucket_count]() const noexcept; + + // hash policy + float xref:#concurrent_node_set_load_factor[load_factor]() const noexcept; + float xref:#concurrent_node_set_max_load_factor[max_load_factor]() const noexcept; + void 
xref:#concurrent_node_set_set_max_load_factor[max_load_factor](float z); + size_type xref:#concurrent_node_set_max_load[max_load]() const noexcept; + void xref:#concurrent_node_set_rehash[rehash](size_type n); + void xref:#concurrent_node_set_reserve[reserve](size_type n); + + // statistics (if xref:concurrent_node_set_boost_unordered_enable_stats[enabled]) + stats xref:#concurrent_node_set_get_stats[get_stats]() const; + void xref:#concurrent_node_set_reset_stats[reset_stats]() noexcept; + }; + + // Deduction Guides + template>, + class Pred = std::equal_to>, + class Allocator = std::allocator>> + concurrent_node_set(InputIterator, InputIterator, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type = xref:#concurrent_node_set_deduction_guides[__see below__], + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_set, Hash, Pred, Allocator>; + + template, class Pred = std::equal_to, + class Allocator = std::allocator> + concurrent_node_set(std::initializer_list, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type = xref:#concurrent_node_set_deduction_guides[__see below__], + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_set; + + template + concurrent_node_set(InputIterator, InputIterator, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type, Allocator) + -> concurrent_node_set, + boost::hash>, + std::equal_to>, Allocator>; + + template + concurrent_node_set(InputIterator, InputIterator, Allocator) + -> concurrent_node_set, + boost::hash>, + std::equal_to>, Allocator>; + + template + concurrent_node_set(InputIterator, InputIterator, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type, Hash, + Allocator) + -> concurrent_node_set, Hash, + std::equal_to>, Allocator>; + + template + concurrent_node_set(std::initializer_list, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type, Allocator) + -> concurrent_node_set, std::equal_to, Allocator>; + + template + concurrent_node_set(std::initializer_list, Allocator) + -> concurrent_node_set, std::equal_to, Allocator>; + + template + concurrent_node_set(std::initializer_list, typename xref:#concurrent_node_set_deduction_guides[__see below__]::size_type, Hash, Allocator) + -> concurrent_node_set, Allocator>; + + // Equality Comparisons + template + bool xref:#concurrent_node_set_operator[operator==](const concurrent_node_set& x, + const concurrent_node_set& y); + + template + bool xref:#concurrent_node_set_operator_2[operator!=](const concurrent_node_set& x, + const concurrent_node_set& y); + + // swap + template + void xref:#concurrent_node_set_swap_2[swap](concurrent_node_set& x, + concurrent_node_set& y) + noexcept(noexcept(x.swap(y))); + + // Erasure + template + typename concurrent_node_set::size_type + xref:#concurrent_node_set_erase_if_2[erase_if](concurrent_node_set& c, Predicate pred); + + // Pmr aliases (C++17 and up) + namespace unordered::pmr { + template, + class Pred = std::equal_to> + using concurrent_node_set = + boost::concurrent_node_set>; + } +} +----- + +--- + +=== Description + +*Template Parameters* + +[cols="1,1"] +|=== + +|_Key_ +|`Key` must be https://en.cppreference.com/w/cpp/named_req/MoveInsertable[MoveInsertable^] into the container +and https://en.cppreference.com/w/cpp/named_req/Erasable[Erasable^] from the container. + +|_Hash_ +|A unary function object type that acts a hash function for a `Key`. 
It takes a single argument of type `Key` and returns a value of type `std::size_t`.
+
+|_Pred_
+|A binary function object that induces an equivalence relation on values of type `Key`. It takes two arguments of type `Key` and returns a value of type `bool`.
+
+|_Allocator_
+|An allocator whose value type is the same as the table's value type.
+`std::allocator_traits<Allocator>::pointer` and `std::allocator_traits<Allocator>::const_pointer`
+must be convertible to/from `value_type*` and `const value_type*`, respectively.
+
+|===
+
+The element nodes of the table are held in an internal _bucket array_. A node is inserted into a bucket determined by
+the hash code of its element, but if the bucket is already occupied (a _collision_), an available one in the vicinity of the
+original position is used.
+
+The size of the bucket array can be automatically increased by a call to `insert`/`emplace`, or as a result of calling
+`rehash`/`reserve`. The _load factor_ of the table (number of elements divided by number of buckets) is never
+greater than `max_load_factor()`, except possibly for small sizes where the implementation may decide to
+allow for higher loads.
+
+If `xref:hash_traits_hash_is_avalanching[hash_is_avalanching]<Hash>::value` is `true`, the hash function
+is used as-is; otherwise, a bit-mixing post-processing stage is added to increase the quality of hashing
+at the expense of extra computational cost.
+
+---
+
+=== Concurrency Requirements and Guarantees
+
+Concurrent invocations of `operator()` on the same const instance of `Hash` or `Pred` are required
+to not introduce data races. For `Alloc` being either `Allocator` or any allocator type rebound
+from `Allocator`, concurrent invocations of the following operations on the same instance `al` of `Alloc`
+are required to not introduce data races:
+
+* Copy construction from `al` of an allocator rebound from `Alloc`
+* `std::allocator_traits<Alloc>::allocate`
+* `std::allocator_traits<Alloc>::deallocate`
+* `std::allocator_traits<Alloc>::construct`
+* `std::allocator_traits<Alloc>::destroy`
+
+In general, these requirements on `Hash`, `Pred` and `Allocator` are met if these types
+are not stateful or if the operations only involve constant access to internal data members.
+
+With the exception of destruction, concurrent invocations of any operation on the same instance of a
+`concurrent_node_set` do not introduce data races — that is, they are thread-safe.
+
+If an operation *op* is explicitly designated as _blocking on_ `x`, where `x` is an instance of a `boost::concurrent_node_set`,
+prior blocking operations on `x` synchronize with *op*. So, blocking operations on the same
+`concurrent_node_set` execute sequentially in a multithreaded scenario.
+
+An operation is said to be _blocking on rehashing of_ ``__x__`` if it blocks on `x`
+only when an internal rehashing is issued.
+
+When executed internally by a `boost::concurrent_node_set`, the following operations by a
+user-provided visitation function on the element passed do not introduce data races:
+
+* Read access to the element.
+* Non-mutable modification of the element.
+* Mutable modification of the element (if the container operation executing the visitation function is not const
+and its name does not contain `cvisit`.)
+
+Any `boost::concurrent_node_set` operation that inserts or modifies an element `e`
+synchronizes with the internal invocation of a visitation function on `e`.
+
+Visitation functions executed by a `boost::concurrent_node_set` `x` are not allowed to invoke any operation
+on `x`; invoking operations on a different `boost::concurrent_node_set` instance `y` is allowed only
+if concurrent outstanding operations on `y` do not access `x` directly or indirectly.
+
+---
+
+=== Configuration Macros
+
+==== `BOOST_UNORDERED_DISABLE_REENTRANCY_CHECK`
+
+In debug builds (more precisely, when
+link:../../../assert/doc/html/assert.html#boost_assert_is_void[`BOOST_ASSERT_IS_VOID`^]
+is not defined), __container reentrancies__ (illegally invoking an operation on `m` from within
+a function visiting elements of `m`) are detected and signalled through `BOOST_ASSERT_MSG`.
+When run-time speed is a concern, the feature can be disabled by globally defining
+this macro.
+
+---
+
+==== `BOOST_UNORDERED_ENABLE_STATS`
+
+Globally define this macro to enable xref:#stats[statistics calculation] for the table. Note
+that this option decreases the overall performance of many operations.
+
+---
+
+=== Typedefs
+
+[source,c++,subs=+quotes]
+----
+typedef _implementation-defined_ node_type;
+----
+
+A class for holding extracted table elements, modelling
+https://en.cppreference.com/w/cpp/container/node_handle[NodeHandle].
+
+---
+
+[source,c++,subs=+quotes]
+----
+typedef _implementation-defined_ insert_return_type;
+----
+
+A specialization of an internal class template:
+
+[source,c++,subs=+quotes]
+----
+template<class NodeType>
+struct _insert_return_type_ // name is exposition only
+{
+  bool inserted;
+  NodeType node;
+};
+----
+
+with `NodeType` = `node_type`.
+
+---
+
+=== Constants
+
+```cpp
+static constexpr size_type bulk_visit_size;
+```
+
+Chunk size internally used in xref:concurrent_node_set_bulk_visit[bulk visit] operations.
+
+=== Constructors
+
+==== Default Constructor
+```c++
+concurrent_node_set();
+```
+
+Constructs an empty table using `hasher()` as the hash function,
+`key_equal()` as the key equality predicate and `allocator_type()` as the allocator.
+
+[horizontal]
+Postconditions:;; `size() == 0`
+Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Bucket Count Constructor
+```c++
+explicit concurrent_node_set(size_type n,
+                             const hasher& hf = hasher(),
+                             const key_equal& eql = key_equal(),
+                             const allocator_type& a = allocator_type());
+```
+
+Constructs an empty table with at least `n` buckets, using `hf` as the hash
+function, `eql` as the key equality predicate, and `a` as the allocator.
+
+[horizontal]
+Postconditions:;; `size() == 0`
+Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Iterator Range Constructor
+[source,c++,subs="+quotes"]
+----
+template<class InputIterator>
+  concurrent_node_set(InputIterator f, InputIterator l,
+                      size_type n = _implementation-defined_,
+                      const hasher& hf = hasher(),
+                      const key_equal& eql = key_equal(),
+                      const allocator_type& a = allocator_type());
+----
+
+Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `eql` as the key equality predicate and `a` as the allocator, and inserts the elements from `[f, l)` into it.
+
+[horizontal]
+Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
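+
+A minimal sketch of range construction (names are assumptions of the example):
+
+```c++
+#include <boost/unordered/concurrent_node_set.hpp>
+#include <vector>
+
+int main() {
+  std::vector<int> v{1, 2, 3, 2, 1};
+
+  // Duplicates in the range are discarded; the bucket count is chosen by
+  // the implementation unless explicitly specified.
+  boost::concurrent_node_set<int> s(v.begin(), v.end()); // s.size() == 3
+}
+```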
+ +--- + +==== Copy Constructor +```c++ +concurrent_node_set(concurrent_node_set const& other); +``` + +The copy constructor. Copies the contained elements, hash function, predicate and allocator. + +If `Allocator::select_on_container_copy_construction` exists and has the right signature, the allocator will be constructed from its result. + +[horizontal] +Requires:;; `value_type` is copy constructible +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor +```c++ +concurrent_node_set(concurrent_node_set&& other); +``` + +The move constructor. The internal bucket array of `other` is transferred directly to the new table. +The hash function, predicate and allocator are moved-constructed from `other`. +If statistics are xref:concurrent_node_set_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. + +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Iterator Range Constructor with Allocator +```c++ +template + concurrent_node_set(InputIterator f, InputIterator l, const allocator_type& a); +``` + +Constructs an empty table using `a` as the allocator, with the default hash function and key equality predicate and inserts the elements from `[f, l)` into it. + +[horizontal] +Requires:;; `hasher`, `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== Allocator Constructor +```c++ +explicit concurrent_node_set(Allocator const& a); +``` + +Constructs an empty table, using allocator `a`. + +--- + +==== Copy Constructor with Allocator +```c++ +concurrent_node_set(concurrent_node_set const& other, Allocator const& a); +``` + +Constructs a table, copying ``other``'s contained elements, hash function, and predicate, but using allocator `a`. + +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor with Allocator +```c++ +concurrent_node_set(concurrent_node_set&& other, Allocator const& a); +``` + +If `a == other.get_allocator()`, the elements of `other` are transferred directly to the new table; +otherwise, elements are moved-constructed from those of `other`. The hash function and predicate are moved-constructed +from `other`, and the allocator is copy-constructed from `a`. +If statistics are xref:concurrent_node_set_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` iff `a == other.get_allocator()`, +and always calls `other.reset_stats()`. + +[horizontal] +Concurrency:;; Blocking on `other`. + +--- + +==== Move Constructor from unordered_node_set + +```c++ +concurrent_node_set(unordered_node_set&& other); +``` + +Move construction from a xref:#unordered_node_set[`unordered_node_set`]. +The internal bucket array of `other` is transferred directly to the new container. +The hash function, predicate and allocator are moved-constructed from `other`. +If statistics are xref:concurrent_node_set_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. 
+
+[horizontal]
+Complexity:;; O(`bucket_count()`)
+
+---
+
+==== Initializer List Constructor
+[source,c++,subs="+quotes"]
+----
+concurrent_node_set(std::initializer_list<value_type> il,
+                    size_type n = _implementation-defined_,
+                    const hasher& hf = hasher(),
+                    const key_equal& eql = key_equal(),
+                    const allocator_type& a = allocator_type());
+----
+
+Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `eql` as the key equality predicate and `a` as the allocator, and inserts the elements from `il` into it.
+
+[horizontal]
+Requires:;; If the defaults are used, `hasher`, `key_equal` and `allocator_type` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Bucket Count Constructor with Allocator
+```c++
+concurrent_node_set(size_type n, allocator_type const& a);
+```
+
+Constructs an empty table with at least `n` buckets, using the default hash function and key equality predicate and `a` as the allocator.
+
+[horizontal]
+Postconditions:;; `size() == 0`
+Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Bucket Count Constructor with Hasher and Allocator
+```c++
+concurrent_node_set(size_type n, hasher const& hf, allocator_type const& a);
+```
+
+Constructs an empty table with at least `n` buckets, using `hf` as the hash function, the default key equality predicate and `a` as the allocator.
+
+[horizontal]
+Postconditions:;; `size() == 0`
+Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Iterator Range Constructor with Bucket Count and Allocator
+[source,c++,subs="+quotes"]
+----
+template<class InputIterator>
+  concurrent_node_set(InputIterator f, InputIterator l, size_type n, const allocator_type& a);
+----
+
+Constructs an empty table with at least `n` buckets, using `a` as the allocator and default hash function and key equality predicate, and inserts the elements from `[f, l)` into it.
+
+[horizontal]
+Requires:;; `hasher`, `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== Iterator Range Constructor with Bucket Count and Hasher
+[source,c++,subs="+quotes"]
+----
+template<class InputIterator>
+  concurrent_node_set(InputIterator f, InputIterator l, size_type n, const hasher& hf,
+                      const allocator_type& a);
+----
+
+Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `a` as the allocator, with the default key equality predicate, and inserts the elements from `[f, l)` into it.
+
+[horizontal]
+Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+
+---
+
+==== initializer_list Constructor with Allocator
+
+```c++
+concurrent_node_set(std::initializer_list<value_type> il, const allocator_type& a);
+```
+
+Constructs an empty table using `a` and default hash function and key equality predicate, and inserts the elements from `il` into it.
+
+[horizontal]
+Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^].
+ +--- + +==== initializer_list Constructor with Bucket Count and Allocator + +```c++ +concurrent_node_set(std::initializer_list il, size_type n, const allocator_type& a); +``` + +Constructs an empty table with at least `n` buckets, using `a` and default hash function and key equality predicate, and inserts the elements from `il` into it. + +[horizontal] +Requires:;; `hasher` and `key_equal` need to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +==== initializer_list Constructor with Bucket Count and Hasher and Allocator + +```c++ +concurrent_node_set(std::initializer_list il, size_type n, const hasher& hf, + const allocator_type& a); +``` + +Constructs an empty table with at least `n` buckets, using `hf` as the hash function, `a` as the allocator and default key equality predicate,and inserts the elements from `il` into it. + +[horizontal] +Requires:;; `key_equal` needs to be https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]. + +--- + +=== Destructor + +```c++ +~concurrent_node_set(); +``` + +[horizontal] +Note:;; The destructor is applied to every element, and all memory is deallocated + +--- + +=== Assignment + +==== Copy Assignment + +```c++ +concurrent_node_set& operator=(concurrent_node_set const& other); +``` + +The assignment operator. Destroys previously existing elements, copy-assigns the hash function and predicate from `other`, +copy-assigns the allocator from `other` if `Alloc::propagate_on_container_copy_assignment` exists and `Alloc::propagate_on_container_copy_assignment::value` is `true`, +and finally inserts copies of the elements of `other`. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^] +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== Move Assignment +```c++ +concurrent_node_set& operator=(concurrent_node_set&& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_move_assignment::value); +``` +The move assignment operator. Destroys previously existing elements, swaps the hash function and predicate from `other`, +and move-assigns the allocator from `other` if `Alloc::propagate_on_container_move_assignment` exists and `Alloc::propagate_on_container_move_assignment::value` is `true`. +If at this point the allocator is equal to `other.get_allocator()`, the internal bucket array of `other` is transferred directly to `*this`; +otherwise, inserts move-constructed copies of the elements of `other`. +If statistics are xref:concurrent_node_set_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` iff the final allocator is equal to `other.get_allocator()`, +and always calls `other.reset_stats()`. + +[horizontal] +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== Initializer List Assignment +```c++ +concurrent_node_set& operator=(std::initializer_list il); +``` + +Assign from values in initializer list. All previously existing elements are destroyed. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^] +Concurrency:;; Blocking on `*this`. 
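+
+A short sketch of initializer list assignment (single-threaded, names assumed for the example):
+
+```c++
+boost::concurrent_node_set<int> s;
+s.insert(99);
+
+// All previously existing elements are destroyed first; the operation
+// blocks any other thread accessing s while it runs.
+s = {1, 2, 3}; // s now holds exactly 1, 2 and 3
+```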
+ +--- + +=== Visitation + +==== [c]visit + +```c++ +template size_t visit(const key_type& k, F f); +template size_t visit(const key_type& k, F f) const; +template size_t cvisit(const key_type& k, F f) const; +template size_t visit(const K& k, F f); +template size_t visit(const K& k, F f) const; +template size_t cvisit(const K& k, F f) const; +``` + +If an element `x` exists with key equivalent to `k`, invokes `f` with a const reference to `x`. + +[horizontal] +Returns:;; The number of elements visited (0 or 1). +Notes:;; The `template` overloads only participate in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== Bulk visit + +```c++ +template + size_t visit(FwdIterator first, FwdIterator last, F f); +template + size_t visit(FwdIterator first, FwdIterator last, F f) const; +template + size_t cvisit(FwdIterator first, FwdIterator last, F f) const; +``` + +For each element `k` in the range [`first`, `last`), +if there is an element `x` in the container with key equivalent to `k`, +invokes `f` with a const reference to `x`. + +Although functionally equivalent to individually invoking +xref:concurrent_node_set_cvisit[`[c\]visit`] for each key, bulk visitation +performs generally faster due to internal streamlining optimizations. +It is advisable that `std::distance(first,last)` be at least +xref:#concurrent_node_set_constants[`bulk_visit_size`] to enjoy +a performance gain: beyond this size, performance is not expected +to increase further. + +[horizontal] +Requires:;; `FwdIterator` is a https://en.cppreference.com/w/cpp/named_req/ForwardIterator[LegacyForwardIterator^] +({cpp}11 to {cpp}17), +or satisfies https://en.cppreference.com/w/cpp/iterator/forward_iterator[std::forward_iterator^] ({cpp}20 and later). +For `K` = `std::iterator_traits::value_type`, either `K` is `key_type` or +else `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. +In the latter case, the library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. +This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. +Returns:;; The number of elements visited. + +--- + +==== [c]visit_all + +```c++ +template size_t visit_all(F f); +template size_t visit_all(F f) const; +template size_t cvisit_all(F f) const; +``` + +Successively invokes `f` with const references to each of the elements in the table. + +[horizontal] +Returns:;; The number of elements visited. + +--- + +==== Parallel [c]visit_all + +```c++ +template void visit_all(ExecutionPolicy&& policy, F f); +template void visit_all(ExecutionPolicy&& policy, F f) const; +template void cvisit_all(ExecutionPolicy&& policy, F f) const; +``` + +Invokes `f` with const references to each of the elements in the table. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +These overloads only participate in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. 
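+
+A sketch of parallel visitation (assumes a standard library with {cpp}17 parallel algorithm support;
+the accumulator must be thread-safe because the visitation function may run concurrently):
+
+```c++
+#include <boost/unordered/concurrent_node_set.hpp>
+#include <atomic>
+#include <execution>
+
+int main() {
+  boost::concurrent_node_set<int> s{1, 2, 3};
+  std::atomic<int> sum{0};
+
+  // Each element is visited exactly once, possibly from different threads.
+  s.visit_all(std::execution::par, [&sum](const int& x) { sum += x; });
+}
+```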
+ +--- + +==== [c]visit_while + +```c++ +template bool visit_while(F f); +template bool visit_while(F f) const; +template bool cvisit_while(F f) const; +``` + +Successively invokes `f` with const references to each of the elements in the table until `f` returns `false` +or all the elements are visited. + +[horizontal] +Returns:;; `false` iff `f` ever returns `false`. + +--- + +==== Parallel [c]visit_while + +```c++ +template bool visit_while(ExecutionPolicy&& policy, F f); +template bool visit_while(ExecutionPolicy&& policy, F f) const; +template bool cvisit_while(ExecutionPolicy&& policy, F f) const; +``` + +Invokes `f` with const references to each of the elements in the table until `f` returns `false` +or all the elements are visited. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Returns:;; `false` iff `f` ever returns `false`. +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +These overloads only participate in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. + ++ +Parallelization implies that execution does not necessary finish as soon as `f` returns `false`, and as a result +`f` may be invoked with further elements for which the return value is also `false`. + +--- + +=== Size and Capacity + +==== empty + +```c++ +[[nodiscard]] bool empty() const noexcept; +``` + +[horizontal] +Returns:;; `size() == 0` + +--- + +==== size + +```c++ +size_type size() const noexcept; +``` + +[horizontal] +Returns:;; The number of elements in the table. + +[horizontal] +Notes:;; In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true size of the table right after execution. + +--- + +==== max_size + +```c++ +size_type max_size() const noexcept; +``` + +[horizontal] +Returns:;; `size()` of the largest possible table. + +--- + +=== Modifiers + +==== emplace +```c++ +template bool emplace(Args&&... args); +``` + +Inserts an object, constructed with the arguments `args`, in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is constructible from `args`. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. + +--- + +==== Copy Insert +```c++ +bool insert(const value_type& obj); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^]. +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. + +--- + +==== Move Insert +```c++ +bool insert(value_type&& obj); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/MoveInsertable[MoveInsertable^]. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. + +--- + +==== Transparent Insert +```c++ +template bool insert(K&& k); +``` + +Inserts an element constructed from `std::forward(k)` in the container if and only if there is no element in the container with an equivalent key. 
+ +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/EmplaceConstructible[EmplaceConstructible^] from `k`. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; This overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== Insert Iterator Range +```c++ +template size_type insert(InputIterator first, InputIterator last); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + while(first != last) this->xref:#concurrent_node_set_emplace[emplace](*first++); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Initializer List +```c++ +size_type insert(std::initializer_list il); +``` + +Equivalent to +[listing,subs="+macros,+quotes"] +----- + this->xref:#concurrent_node_set_insert_iterator_range[insert](il.begin(), il.end()); +----- + +[horizontal] +Returns:;; The number of elements inserted. + +--- + +==== Insert Node +```c++ +insert_return_type insert(node_type&& nh); +``` + +If `nh` is not empty, inserts the associated element in the table if and only if there is no element in the table with a key equivalent to `nh.value()`. +`nh` is empty when the function returns. + +[horizontal] +Returns:;; An `insert_return_type` object constructed from `inserted` and `node`: + +* If `nh` is empty, `inserted` is `false` and `node` is empty. +* Otherwise if the insertion took place, `inserted` is true and `node` is empty. +* If the insertion failed, `inserted` is false and `node` has the previous value of `nh`. +Throws:;; If an exception is thrown by an operation other than a call to `hasher` the function has no effect. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; Behavior is undefined if `nh` is not empty and the allocators of `nh` and the container are not equal. + +--- + +==== emplace_or_[c]visit +```c++ +template bool emplace_or_visit(Args&&... args, F&& f); +template bool emplace_or_cvisit(Args&&... args, F&& f); +``` + +Inserts an object, constructed with the arguments `args`, in the table if there is no element in the table with an equivalent key. +Otherwise, invokes `f` with a const reference to the equivalent element. + +[horizontal] +Requires:;; `value_type` is constructible from `args`. +Returns:;; `true` if an insert took place. +Concurrency:;; Blocking on rehashing of `*this`. +Notes:;; The interface is exposition only, as C++ does not allow to declare a parameter `f` after a variadic parameter pack. + +--- + +==== Copy insert_or_[c]visit +```c++ +template bool insert_or_visit(const value_type& obj, F f); +template bool insert_or_cvisit(const value_type& obj, F f); +``` + +Inserts `obj` in the table if and only if there is no element in the table with an equivalent key. +Otherwise, invokes `f` with a const reference to the equivalent element. + +[horizontal] +Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/CopyInsertable[CopyInsertable^]. +Returns:;; `true` if an insert took place. + +Concurrency:;; Blocking on rehashing of `*this`. 
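+
+As an illustrative sketch (names and element type are arbitrary, not taken from the reference),
+`insert_or_cvisit` can be used to detect duplicates while filling the set, since the visitation
+function is only invoked when an equivalent element is already present:
+
+```c++
+#include <boost/unordered/concurrent_node_set.hpp>
+#include <atomic>
+#include <cstddef>
+#include <iostream>
+#include <string>
+#include <vector>
+
+int main()
+{
+  boost::concurrent_node_set<std::string> seen;
+  std::vector<std::string> words{"a", "b", "a", "c", "b", "a"};
+
+  std::atomic<std::size_t> duplicates{0};
+  for (const auto& w: words) {
+    // inserts w, or visits the already existing equivalent element
+    seen.insert_or_cvisit(w, [&](const std::string&) { ++duplicates; });
+  }
+  std::cout << seen.size() << " distinct, " << duplicates.load() << " duplicates\n";
+}
+```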
+
+---
+
+==== Move insert_or_[c]visit
+```c++
+template<class F> bool insert_or_visit(value_type&& obj, F f);
+template<class F> bool insert_or_cvisit(value_type&& obj, F f);
+```
+
+Inserts `obj` in the table if and only if there is no element in the table with an equivalent key.
+Otherwise, invokes `f` with a const reference to the equivalent element.
+
+[horizontal]
+Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/MoveInsertable[MoveInsertable^].
+Returns:;; `true` if an insert took place.
+Concurrency:;; Blocking on rehashing of `*this`.
+
+---
+
+==== Transparent insert_or_[c]visit
+```c++
+template<class K, class F> bool insert_or_visit(K&& k, F f);
+template<class K, class F> bool insert_or_cvisit(K&& k, F f);
+```
+
+Inserts an element constructed from `std::forward<K>(k)` in the container if and only if there is no element in the container with an equivalent key.
+Otherwise, invokes `f` with a const reference to the equivalent element.
+
+[horizontal]
+Requires:;; `value_type` is https://en.cppreference.com/w/cpp/named_req/EmplaceConstructible[EmplaceConstructible^] from `k`.
+Returns:;; `true` if an insert took place.
+Concurrency:;; Blocking on rehashing of `*this`.
+Notes:;; These overloads only participate in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type.
+
+---
+
+==== Insert Iterator Range or Visit
+```c++
+template<class InputIterator, class F>
+  size_type insert_or_visit(InputIterator first, InputIterator last, F f);
+template<class InputIterator, class F>
+  size_type insert_or_cvisit(InputIterator first, InputIterator last, F f);
+```
+
+Equivalent to
+[listing,subs="+macros,+quotes"]
+-----
+ while(first != last) this->xref:#concurrent_node_set_emplace_or_cvisit[emplace_or_[c\]visit](*first++, f);
+-----
+
+[horizontal]
+Returns:;; The number of elements inserted.
+
+---
+
+==== Insert Initializer List or Visit
+```c++
+template<class F> size_type insert_or_visit(std::initializer_list<value_type> il, F f);
+template<class F> size_type insert_or_cvisit(std::initializer_list<value_type> il, F f);
+```
+
+Equivalent to
+[listing,subs="+macros,+quotes"]
+-----
+ this->xref:#concurrent_node_set_insert_iterator_range_or_visit[insert_or_[c\]visit](il.begin(), il.end(), f);
+-----
+
+[horizontal]
+Returns:;; The number of elements inserted.
+
+---
+
+==== Insert Node or Visit
+```c++
+template<class F> insert_return_type insert_or_visit(node_type&& nh, F f);
+template<class F> insert_return_type insert_or_cvisit(node_type&& nh, F f);
+```
+
+If `nh` is empty, does nothing.
+Otherwise, inserts the element associated with `nh` in the table if and only if there is no element in the table with a key equivalent to `nh.value()`;
+if such an element exists, invokes `f` with a const reference to it.
+
+[horizontal]
+Returns:;; An `insert_return_type` object constructed from `inserted` and `node`:
+
+* If `nh` is empty, `inserted` is `false` and `node` is empty.
+* Otherwise, if the insertion took place, `inserted` is `true` and `node` is empty.
+* If the insertion failed, `inserted` is `false` and `node` has the previous value of `nh`.
+Throws:;; If an exception is thrown by an operation other than a call to `hasher` or a call to `f`, the function has no effect.
+Concurrency:;; Blocking on rehashing of `*this`.
+Notes:;; Behavior is undefined if `nh` is not empty and the allocators of `nh` and the container are not equal.
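+
+As a usage sketch (container contents are arbitrary), nodes obtained with `extract` (described
+later in this reference) can be transferred between tables without copying the element, and the
+returned `insert_return_type` tells whether the transfer succeeded:
+
+```c++
+#include <boost/unordered/concurrent_node_set.hpp>
+#include <cassert>
+#include <string>
+
+int main()
+{
+  boost::concurrent_node_set<std::string> s1{"alpha", "beta"};
+  boost::concurrent_node_set<std::string> s2{"beta"};
+
+  // move "alpha" from s1 to s2; the element itself is never copied
+  auto res = s2.insert(s1.extract("alpha"));
+  assert(res.inserted && res.node.empty());
+
+  // "beta" is already in s2: insertion fails and the node keeps the element
+  auto res2 = s2.insert(s1.extract("beta"));
+  assert(!res2.inserted && !res2.node.empty());
+}
+```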
+ +--- + +==== erase +```c++ +size_type erase(const key_type& k); +template size_type erase(const K& k); +``` + +Erases the element with key equivalent to `k` if it exists. + +[horizontal] +Returns:;; The number of elements erased (0 or 1). +Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== erase_if by Key +```c++ +template size_type erase_if(const key_type& k, F f); +template size_type erase_if(const K& k, F f); +``` + +Erases the element `x` with key equivalent to `k` if it exists and `f(x)` is `true`. + +[horizontal] +Returns:;; The number of elements erased (0 or 1). +Throws:;; Only throws an exception if it is thrown by `hasher`, `key_equal` or `f`. +Notes:;; The `template` overload only participates in overload resolution if `std::is_execution_policy_v>` is `false`. + ++ +The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== erase_if +```c++ +template size_type erase_if(F f); +``` + +Successively invokes `f` with references to each of the elements in the table, and erases those for which `f` returns `true`. + +[horizontal] +Returns:;; The number of elements erased. +Throws:;; Only throws an exception if it is thrown by `f`. + +--- + +==== Parallel erase_if +```c++ +template void erase_if(ExecutionPolicy&& policy, F f); +``` + +Invokes `f` with references to each of the elements in the table, and erases those for which `f` returns `true`. +Execution is parallelized according to the semantics of the execution policy specified. + +[horizontal] +Throws:;; Depending on the exception handling mechanism of the execution policy used, may call `std::terminate` if an exception is thrown within `f`. +Notes:;; Only available in compilers supporting C++17 parallel algorithms. + ++ +This overload only participates in overload resolution if `std::is_execution_policy_v>` is `true`. + ++ +Unsequenced execution policies are not allowed. + +--- + +==== swap +```c++ +void swap(concurrent_node_set& other) + noexcept(boost::allocator_traits::is_always_equal::value || + boost::allocator_traits::propagate_on_container_swap::value); +``` + +Swaps the contents of the table with the parameter. + +If `Allocator::propagate_on_container_swap` is declared and `Allocator::propagate_on_container_swap::value` is `true` then the tables' allocators are swapped. Otherwise, swapping with unequal allocators results in undefined behavior. + +[horizontal] +Throws:;; Nothing unless `key_equal` or `hasher` throw on swapping. +Concurrency:;; Blocking on `*this` and `other`. + +--- + +==== extract +```c++ +node_type extract(const key_type& k); +template node_type extract(K&& k); +``` + +Extracts the element with key equivalent to `k`, if it exists. + +[horizontal] +Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. 
+Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== extract_if +```c++ +template node_type extract_if(const key_type& k, F f); +template node_type extract_if(K&& k, F f); +``` + +Extracts the element `x` with key equivalent to `k`, if it exists and `f(x)` is `true`. + +[horizontal] +Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. +Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal` or `f`. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + +--- + +==== clear +```c++ +void clear() noexcept; +``` + +Erases all elements in the table. + +[horizontal] +Postconditions:;; `size() == 0`, `max_load() >= max_load_factor() * bucket_count()` +Concurrency:;; Blocking on `*this`. + +--- + +==== merge +```c++ +template + size_type merge(concurrent_node_set& source); +template + size_type merge(concurrent_node_set&& source); +``` + +Move-inserts all the elements from `source` whose key is not already present in `*this`, and erases them from `source`. + +[horizontal] +Returns:;; The number of elements inserted. +Concurrency:;; Blocking on `*this` and `source`. + +--- + +=== Observers + +==== get_allocator +``` +allocator_type get_allocator() const noexcept; +``` + +[horizontal] +Returns:;; The table's allocator. + +--- + +==== hash_function +``` +hasher hash_function() const; +``` + +[horizontal] +Returns:;; The table's hash function. + +--- + +==== key_eq +``` +key_equal key_eq() const; +``` + +[horizontal] +Returns:;; The table's key equality predicate. + +--- + +=== Set Operations + +==== count +```c++ +size_type count(const key_type& k) const; +template + size_type count(const K& k) const; +``` + +[horizontal] +Returns:;; The number of elements with key equivalent to `k` (0 or 1). +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. + ++ +In the presence of concurrent insertion operations, the value returned may not accurately reflect +the true state of the table right after execution. + +--- + +==== contains +```c++ +bool contains(const key_type& k) const; +template + bool contains(const K& k) const; +``` + +[horizontal] +Returns:;; A boolean indicating whether or not there is an element with key equal to `k` in the table. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. 
This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type.
+
++
+In the presence of concurrent insertion operations, the value returned may not accurately reflect
+the true state of the table right after execution.
+
+---
+
+=== Bucket Interface
+
+==== bucket_count
+```c++
+size_type bucket_count() const noexcept;
+```
+
+[horizontal]
+Returns:;; The size of the bucket array.
+
+---
+
+=== Hash Policy
+
+==== load_factor
+```c++
+float load_factor() const noexcept;
+```
+
+[horizontal]
+Returns:;; `static_cast<float>(size())/static_cast<float>(bucket_count())`, or `0` if `bucket_count() == 0`.
+
+---
+
+==== max_load_factor
+
+```c++
+float max_load_factor() const noexcept;
+```
+
+[horizontal]
+Returns:;; The table's maximum load factor.
+
+---
+
+==== Set max_load_factor
+```c++
+void max_load_factor(float z);
+```
+
+[horizontal]
+Effects:;; Does nothing, as the user is not allowed to change this parameter. Kept for compatibility with `boost::unordered_set`.
+
+---
+
+==== max_load
+
+```c++
+size_type max_load() const noexcept;
+```
+
+[horizontal]
+Returns:;; The maximum number of elements the table can hold without rehashing, assuming that no further elements will be erased.
+Notes:;; After construction, rehash or clearance, the table's maximum load is at least `max_load_factor() * bucket_count()`.
+This number may decrease on erasure under high-load conditions.
+
++
+In the presence of concurrent insertion operations, the value returned may not accurately reflect
+the true state of the table right after execution.
+
+---
+
+==== rehash
+```c++
+void rehash(size_type n);
+```
+
+If necessary, changes the size of the bucket array so that there are at least `n` buckets, and so that the load factor is less than or equal to the maximum load factor. When applicable, this will either grow or shrink the `bucket_count()` associated with the table.
+
+When `size() == 0`, `rehash(0)` will deallocate the underlying bucket array.
+
+[horizontal]
+Throws:;; The function has no effect if an exception is thrown, unless it is thrown by the table's hash function or comparison function.
+Concurrency:;; Blocking on `*this`.
+
+---
+
+==== reserve
+```c++
+void reserve(size_type n);
+```
+
+Equivalent to `a.rehash(ceil(n / a.max_load_factor()))`, where `a` is the table.
+
+Similar to `rehash`, this function can be used to grow or shrink the number of buckets in the table.
+
+[horizontal]
+Throws:;; The function has no effect if an exception is thrown, unless it is thrown by the table's hash function or comparison function.
+Concurrency:;; Blocking on `*this`.
+
+---
+
+=== Statistics
+
+==== get_stats
+```c++
+stats get_stats() const;
+```
+
+[horizontal]
+Returns:;; A statistical description of the insertion and lookup operations performed by the table so far.
+Notes:;; Only available if xref:#stats[statistics calculation] is xref:#concurrent_node_set_boost_unordered_enable_stats[enabled].
+
+---
+
+==== reset_stats
+```c++
+void reset_stats() noexcept;
+```
+
+[horizontal]
+Effects:;; Sets to zero the internal statistics kept by the table.
+Notes:;; Only available if xref:#stats[statistics calculation] is xref:#concurrent_node_set_boost_unordered_enable_stats[enabled].
+
+---
+
+=== Deduction Guides
+A deduction guide will not participate in overload resolution if any of the following are true:
+
+ - It has an `InputIterator` template parameter and a type that does not qualify as an input iterator is deduced for that parameter.
+ - It has an `Allocator` template parameter and a type that does not qualify as an allocator is deduced for that parameter.
+ - It has a `Hash` template parameter and an integral type or a type that qualifies as an allocator is deduced for that parameter.
+ - It has a `Pred` template parameter and a type that qualifies as an allocator is deduced for that parameter.
+
+A `size_type` parameter type in a deduction guide refers to the `size_type` member type of the
+container type deduced by the deduction guide. Its default value coincides with the default value
+of the constructor selected.
+
+==== __iter-value-type__
+[listing,subs="+macros,+quotes"]
+-----
+template<class InputIterator>
+  using __iter-value-type__ =
+    typename std::iterator_traits<InputIterator>::value_type; // exposition only
+-----
+
+=== Equality Comparisons
+
+==== operator==
+```c++
+template<class Key, class Hash, class Pred, class Allocator>
+  bool operator==(const concurrent_node_set<Key, Hash, Pred, Allocator>& x,
+                  const concurrent_node_set<Key, Hash, Pred, Allocator>& y);
+```
+
+Returns `true` if `x.size() == y.size()` and for every element in `x`, there is an element in `y` with the same key, with an equal value (using `operator==` to compare the value types).
+
+[horizontal]
+Concurrency:;; Blocking on `x` and `y`.
+Notes:;; Behavior is undefined if the two tables don't have equivalent equality predicates.
+
+---
+
+==== operator!=
+```c++
+template<class Key, class Hash, class Pred, class Allocator>
+  bool operator!=(const concurrent_node_set<Key, Hash, Pred, Allocator>& x,
+                  const concurrent_node_set<Key, Hash, Pred, Allocator>& y);
+```
+
+Returns `false` if `x.size() == y.size()` and for every element in `x`, there is an element in `y` with the same key, with an equal value (using `operator==` to compare the value types).
+
+[horizontal]
+Concurrency:;; Blocking on `x` and `y`.
+Notes:;; Behavior is undefined if the two tables don't have equivalent equality predicates.
+
+---
+
+=== Swap
+```c++
+template<class Key, class Hash, class Pred, class Allocator>
+  void swap(concurrent_node_set<Key, Hash, Pred, Allocator>& x,
+            concurrent_node_set<Key, Hash, Pred, Allocator>& y)
+    noexcept(noexcept(x.swap(y)));
+```
+
+Equivalent to
+[listing,subs="+macros,+quotes"]
+-----
+x.xref:#concurrent_node_set_swap[swap](y);
+-----
+
+---
+
+=== erase_if
+```c++
+template<class Key, class Hash, class Pred, class Allocator, class Predicate>
+  typename concurrent_node_set<Key, Hash, Pred, Allocator>::size_type
+    erase_if(concurrent_node_set<Key, Hash, Pred, Allocator>& c, Predicate pred);
+```
+
+Equivalent to
+[listing,subs="+macros,+quotes"]
+-----
+c.xref:#concurrent_node_set_erase_if[erase_if](pred);
+-----
+
+=== Serialization
+
+``concurrent_node_set``s can be archived/retrieved by means of
+link:../../../serialization/index.html[Boost.Serialization^] using the API provided
+by this library. Both regular and XML archives are supported.
+
+==== Saving a concurrent_node_set to an archive
+
+Saves all the elements of a `concurrent_node_set` `x` to an archive (XML archive) `ar`.
+
+[horizontal]
+Requires:;; `value_type` is serializable (XML serializable), and it supports the Boost.Serialization
+`save_construct_data`/`load_construct_data` protocol (automatically supported by
+https://en.cppreference.com/w/cpp/named_req/DefaultConstructible[DefaultConstructible^]
+types).
+Concurrency:;; Blocking on `x`.
+
+---
+
+==== Loading a concurrent_node_set from an archive
+
+Deletes all preexisting elements of a `concurrent_node_set` `x` and inserts into it
+restored copies of the elements of the original `concurrent_node_set` `other` that was
+saved to the storage read by `ar` (a regular or XML archive).
+
+[horizontal]
+Requires:;; `x.key_equal()` is functionally equivalent to `other.key_equal()`.
+Concurrency:;; Blocking on `x`.
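+
+As an illustrative sketch (archive type, file name and contents are arbitrary), a table can be
+saved and later restored with Boost.Serialization text archives; note that archiving blocks all
+other accesses to the container involved:
+
+```c++
+#include <boost/archive/text_iarchive.hpp>
+#include <boost/archive/text_oarchive.hpp>
+#include <boost/unordered/concurrent_node_set.hpp>
+#include <fstream>
+#include <string>
+
+int main()
+{
+  const boost::concurrent_node_set<std::string> s{"red", "green", "blue"};
+
+  {
+    std::ofstream ofs("colors.txt");
+    boost::archive::text_oarchive oa(ofs);
+    oa << s; // blocking on s while saving
+  }
+
+  boost::concurrent_node_set<std::string> t;
+  {
+    std::ifstream ifs("colors.txt");
+    boost::archive::text_iarchive ia(ifs);
+    ia >> t; // blocking on t while loading
+  }
+  // t now holds copies of the elements originally saved from s
+}
+```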
diff --git a/doc/unordered/intro.adoc b/doc/unordered/intro.adoc index 2c6dfbd3..907f59d9 100644 --- a/doc/unordered/intro.adoc +++ b/doc/unordered/intro.adoc @@ -43,7 +43,8 @@ boost::unordered_node_map boost::unordered_flat_map ^.^h|*Concurrent* -^| +^| `boost::concurrent_node_set` + +`boost::concurrent_node_map` ^| `boost::concurrent_flat_set` + `boost::concurrent_flat_map` @@ -59,6 +60,7 @@ There are two variants: **flat** (the fastest) and **node-based**, which provide pointer stability under rehashing at the expense of being slower. * Finally, **concurrent containers** are designed and implemented to be used in high-performance multithreaded scenarios. Their interface is radically different from that of regular C++ containers. +Flat and node-based variants are provided. All sets and maps in Boost.Unordered are instantiatied similarly as `std::unordered_set` and `std::unordered_map`, respectively: @@ -72,8 +74,8 @@ namespace boost { class Pred = std::equal_to, class Alloc = std::allocator > class unordered_set; - // same for unordered_multiset, unordered_flat_set, unordered_node_set - // and concurrent_flat_set + // same for unordered_multiset, unordered_flat_set, unordered_node_set, + // concurrent_flat_set and concurrent_node_set template < class Key, class Mapped, @@ -81,8 +83,8 @@ namespace boost { class Pred = std::equal_to, class Alloc = std::allocator > > class unordered_map; - // same for unordered_multimap, unordered_flat_map, unordered_node_map - // and concurrent_flat_map + // same for unordered_multimap, unordered_flat_map, unordered_node_map, + // concurrent_flat_map and concurrent_node_map } ---- diff --git a/doc/unordered/rationale.adoc b/doc/unordered/rationale.adoc index a531875f..29df32f0 100644 --- a/doc/unordered/rationale.adoc +++ b/doc/unordered/rationale.adoc @@ -121,7 +121,8 @@ for Visual Studio on an x64-mode Intel CPU with SSE2 and for GCC on an IBM s390x == Concurrent Containers The same data structure used by Boost.Unordered open-addressing containers has been chosen -also as the foundation of `boost::concurrent_flat_set` and `boost::concurrent_flat_map`: +also as the foundation of `boost::concurrent_flat_set`/`boost::concurrent_node_set` and +`boost::concurrent_flat_map`/`boost::concurrent_node_map`: * Open-addressing is faster than closed-addressing alternatives, both in non-concurrent and concurrent scenarios. @@ -130,7 +131,7 @@ with minimal locking. In particular, the metadata array can be used for implemen lookup that are lock-free up to the last step of actual element comparison. * Layout compatibility with Boost.Unordered flat containers allows for xref:#concurrent_interoperability_with_non_concurrent_containers[fast transfer] -of all elements between `boost::concurrent_flat_map` and `boost::unordered_flat_map`, +of all elements between a concurrent container and its non-concurrent counterpart, and vice versa. 
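+
+As a sketch of what this enables (types and contents are illustrative), the internal bucket array
+can be handed over between a concurrent container and its non-concurrent counterpart without
+per-element copying or rehashing:
+
+```c++
+#include <boost/unordered/concurrent_node_map.hpp>
+#include <boost/unordered/unordered_node_map.hpp>
+#include <string>
+
+int main()
+{
+  boost::concurrent_node_map<std::string, int> cm;
+  cm.try_emplace("alpha", 1);
+  cm.try_emplace("beta", 2);
+
+  // transfer to the non-concurrent counterpart for single-threaded processing
+  boost::unordered_node_map<std::string, int> m(std::move(cm));
+
+  // ...and back again when concurrent access is needed
+  boost::concurrent_node_map<std::string, int> cm2(std::move(m));
+}
+```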
=== Hash Function and Platform Interoperability diff --git a/doc/unordered/ref.adoc b/doc/unordered/ref.adoc index 6a0d22c4..9a1a5098 100644 --- a/doc/unordered/ref.adoc +++ b/doc/unordered/ref.adoc @@ -13,3 +13,5 @@ include::unordered_node_map.adoc[] include::unordered_node_set.adoc[] include::concurrent_flat_map.adoc[] include::concurrent_flat_set.adoc[] +include::concurrent_node_map.adoc[] +include::concurrent_node_set.adoc[] diff --git a/doc/unordered/structures.adoc b/doc/unordered/structures.adoc index 5b1521fa..66214340 100644 --- a/doc/unordered/structures.adoc +++ b/doc/unordered/structures.adoc @@ -129,7 +129,8 @@ xref:#rationale_open_addresing_containers[corresponding section]. == Concurrent Containers -`boost::concurrent_flat_set` and `boost::concurrent_flat_map` use the basic +`boost::concurrent_flat_set`/`boost::concurrent_node_set` and +`boost::concurrent_flat_map`/`boost::concurrent_node_map` use the basic xref:#structures_open_addressing_containers[open-addressing layout] described above augmented with synchronization mechanisms. diff --git a/doc/unordered/unordered_node_map.adoc b/doc/unordered/unordered_node_map.adoc index f5f7d77f..21f7c392 100644 --- a/doc/unordered/unordered_node_map.adoc +++ b/doc/unordered/unordered_node_map.adoc @@ -78,6 +78,7 @@ namespace boost { explicit xref:#unordered_node_map_allocator_constructor[unordered_node_map](const Allocator& a); xref:#unordered_node_map_copy_constructor_with_allocator[unordered_node_map](const unordered_node_map& other, const Allocator& a); xref:#unordered_node_map_move_constructor_with_allocator[unordered_node_map](unordered_node_map&& other, const Allocator& a); + xref:#unordered_node_map_move_constructor_from_concurrent_node_map[unordered_node_map](concurrent_node_map&& other); xref:#unordered_node_map_initializer_list_constructor[unordered_node_map](std::initializer_list il, size_type n = _implementation-defined_ const hasher& hf = hasher(), @@ -537,6 +538,24 @@ and always calls `other.reset_stats()`. --- +==== Move Constructor from concurrent_node_map + +```c++ +unordered_node_map(concurrent_node_map&& other); +``` + +Move construction from a xref:#concurrent_node_map[`concurrent_node_map`]. +The internal bucket array of `other` is transferred directly to the new container. +The hash function, predicate and allocator are moved-constructed from `other`. +If statistics are xref:unordered_node_map_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. + +[horizontal] +Complexity:;; Constant time. +Concurrency:;; Blocking on `other`. + +--- + ==== Initializer List Constructor [source,c++,subs="+quotes"] ---- @@ -1219,8 +1238,8 @@ Throws:;; Nothing. ==== Extract by Key ```c++ -node_type erase(const key_type& k); -template node_type erase(K&& k); +node_type extract(const key_type& k); +template node_type extract(K&& k); ``` Extracts the element with key equivalent to `k`, if it exists. @@ -1228,7 +1247,7 @@ Extracts the element with key equivalent to `k`, if it exists. [horizontal] Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. -Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs and neither `iterator` nor `const_iterator` are implicitly convertible from `K`. 
The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. --- diff --git a/doc/unordered/unordered_node_set.adoc b/doc/unordered/unordered_node_set.adoc index af1255d5..7b52cc1b 100644 --- a/doc/unordered/unordered_node_set.adoc +++ b/doc/unordered/unordered_node_set.adoc @@ -73,6 +73,7 @@ namespace boost { explicit xref:#unordered_node_set_allocator_constructor[unordered_node_set](const Allocator& a); xref:#unordered_node_set_copy_constructor_with_allocator[unordered_node_set](const unordered_node_set& other, const Allocator& a); xref:#unordered_node_set_move_constructor_with_allocator[unordered_node_set](unordered_node_set&& other, const Allocator& a); + xref:#unordered_node_set_move_constructor_from_concurrent_node_set[unordered_node_set](concurrent_node_set&& other); xref:#unordered_node_set_initializer_list_constructor[unordered_node_set](std::initializer_list il, size_type n = _implementation-defined_ const hasher& hf = hasher(), @@ -489,6 +490,24 @@ and always calls `other.reset_stats()`. --- +==== Move Constructor from concurrent_node_set + +```c++ +unordered_node_set(concurrent_node_set&& other); +``` + +Move construction from a xref:#concurrent_node_set[`concurrent_node_set`]. +The internal bucket array of `other` is transferred directly to the new container. +The hash function, predicate and allocator are moved-constructed from `other`. +If statistics are xref:unordered_node_set_boost_unordered_enable_stats[enabled], +transfers the internal statistical information from `other` and calls `other.reset_stats()`. + +[horizontal] +Complexity:;; Constant time. +Concurrency:;; Blocking on `other`. + +--- + ==== Initializer List Constructor [source,c++,subs="+quotes"] ---- @@ -1028,8 +1047,8 @@ Throws:;; Nothing. ==== Extract by Key ```c++ -node_type erase(const key_type& k); -template node_type erase(K&& k); +node_type extract(const key_type& k); +template node_type extract(K&& k); ``` Extracts the element with key equivalent to `k`, if it exists. @@ -1037,7 +1056,7 @@ Extracts the element with key equivalent to `k`, if it exists. [horizontal] Returns:;; A `node_type` object holding the extracted element, or empty if no element was extracted. Throws:;; Only throws an exception if it is thrown by `hasher` or `key_equal`. -Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs and neither `iterator` nor `const_iterator` are implicitly convertible from `K`. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. +Notes:;; The `template` overload only participates in overload resolution if `Hash::is_transparent` and `Pred::is_transparent` are valid member typedefs. The library assumes that `Hash` is callable with both `K` and `Key` and that `Pred` is transparent. 
This enables heterogeneous lookup which avoids the cost of instantiating an instance of the `Key` type. --- diff --git a/extra/boost_unordered.natvis b/extra/boost_unordered.natvis index 75df78c7..586caee6 100644 --- a/extra/boost_unordered.natvis +++ b/extra/boost_unordered.natvis @@ -421,6 +421,8 @@ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) + + {{ size={table_.size_ctrl.size} }} *reinterpret_cast<hasher*>(static_cast<table_type::super::hash_base*>(&table_)) @@ -433,6 +435,7 @@ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) + {{ size={table_.size_ctrl.size} }} *reinterpret_cast<hasher*>(static_cast<table_type::super::hash_base*>(&table_)) diff --git a/include/boost/unordered/concurrent_node_map.hpp b/include/boost/unordered/concurrent_node_map.hpp new file mode 100644 index 00000000..35abd653 --- /dev/null +++ b/include/boost/unordered/concurrent_node_map.hpp @@ -0,0 +1,975 @@ +/* Fast open-addressing, node-based concurrent hashmap. + * + * Copyright 2023 Christian Mazakas. + * Copyright 2023-2024 Joaquin M Lopez Munoz. + * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. + */ + +#ifndef BOOST_UNORDERED_CONCURRENT_NODE_MAP_HPP +#define BOOST_UNORDERED_CONCURRENT_NODE_MAP_HPP + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include + +namespace boost { + namespace unordered { + template + class concurrent_node_map + { + private: + template + friend class concurrent_node_map; + template + friend class unordered_node_map; + + using type_policy = detail::foa::node_map_types::type>; + + using table_type = + detail::foa::concurrent_table; + + table_type table_; + + template + bool friend operator==(concurrent_node_map const& lhs, + concurrent_node_map const& rhs); + + template + friend typename concurrent_node_map::size_type erase_if( + concurrent_node_map& set, Predicate pred); + + template + friend void serialize( + Archive& ar, concurrent_node_map& c, + unsigned int version); + + public: + using key_type = Key; + using mapped_type = T; + using value_type = typename type_policy::value_type; + using init_type = typename type_policy::init_type; + using size_type = std::size_t; + using difference_type = std::ptrdiff_t; + using hasher = typename boost::unordered::detail::type_identity::type; + using key_equal = typename boost::unordered::detail::type_identity::type; + using allocator_type = typename boost::unordered::detail::type_identity::type; + using reference = value_type&; + using const_reference = value_type const&; + using pointer = typename boost::allocator_pointer::type; + using const_pointer = + typename boost::allocator_const_pointer::type; + using node_type = detail::foa::node_map_handle::type>; + using insert_return_type = + detail::foa::iteratorless_insert_return_type; + static constexpr size_type bulk_visit_size = table_type::bulk_visit_size; + +#if defined(BOOST_UNORDERED_ENABLE_STATS) + using stats = typename table_type::stats; +#endif + + concurrent_node_map() + : concurrent_node_map(detail::foa::default_bucket_count) + { + } + + explicit concurrent_node_map(size_type n, const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : table_(n, hf, eql, a) + { + } + + template + concurrent_node_map(InputIterator f, InputIterator l, + 
size_type n = detail::foa::default_bucket_count, + const hasher& hf = hasher(), const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : table_(n, hf, eql, a) + { + this->insert(f, l); + } + + concurrent_node_map(concurrent_node_map const& rhs) + : table_(rhs.table_, + boost::allocator_select_on_container_copy_construction( + rhs.get_allocator())) + { + } + + concurrent_node_map(concurrent_node_map&& rhs) + : table_(std::move(rhs.table_)) + { + } + + template + concurrent_node_map( + InputIterator f, InputIterator l, allocator_type const& a) + : concurrent_node_map(f, l, 0, hasher(), key_equal(), a) + { + } + + explicit concurrent_node_map(allocator_type const& a) + : table_(detail::foa::default_bucket_count, hasher(), key_equal(), a) + { + } + + concurrent_node_map( + concurrent_node_map const& rhs, allocator_type const& a) + : table_(rhs.table_, a) + { + } + + concurrent_node_map(concurrent_node_map&& rhs, allocator_type const& a) + : table_(std::move(rhs.table_), a) + { + } + + concurrent_node_map(std::initializer_list il, + size_type n = detail::foa::default_bucket_count, + const hasher& hf = hasher(), const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : concurrent_node_map(n, hf, eql, a) + { + this->insert(il.begin(), il.end()); + } + + concurrent_node_map(size_type n, const allocator_type& a) + : concurrent_node_map(n, hasher(), key_equal(), a) + { + } + + concurrent_node_map( + size_type n, const hasher& hf, const allocator_type& a) + : concurrent_node_map(n, hf, key_equal(), a) + { + } + + template + concurrent_node_map( + InputIterator f, InputIterator l, size_type n, const allocator_type& a) + : concurrent_node_map(f, l, n, hasher(), key_equal(), a) + { + } + + template + concurrent_node_map(InputIterator f, InputIterator l, size_type n, + const hasher& hf, const allocator_type& a) + : concurrent_node_map(f, l, n, hf, key_equal(), a) + { + } + + concurrent_node_map( + std::initializer_list il, const allocator_type& a) + : concurrent_node_map( + il, detail::foa::default_bucket_count, hasher(), key_equal(), a) + { + } + + concurrent_node_map(std::initializer_list il, size_type n, + const allocator_type& a) + : concurrent_node_map(il, n, hasher(), key_equal(), a) + { + } + + concurrent_node_map(std::initializer_list il, size_type n, + const hasher& hf, const allocator_type& a) + : concurrent_node_map(il, n, hf, key_equal(), a) + { + } + + concurrent_node_map( + unordered_node_map&& other) + : table_(std::move(other.table_)) + { + } + + ~concurrent_node_map() = default; + + concurrent_node_map& operator=(concurrent_node_map const& rhs) + { + table_ = rhs.table_; + return *this; + } + + concurrent_node_map& operator=(concurrent_node_map&& rhs) noexcept( + noexcept(std::declval() = std::declval())) + { + table_ = std::move(rhs.table_); + return *this; + } + + concurrent_node_map& operator=(std::initializer_list ilist) + { + table_ = ilist; + return *this; + } + + /// Capacity + /// + + size_type size() const noexcept { return table_.size(); } + size_type max_size() const noexcept { return table_.max_size(); } + + BOOST_ATTRIBUTE_NODISCARD bool empty() const noexcept + { + return size() == 0; + } + + template + BOOST_FORCEINLINE size_type visit(key_type const& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.visit(k, f); + } + + template + BOOST_FORCEINLINE size_type visit(key_type const& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(k, f); + } + + template 
+ BOOST_FORCEINLINE size_type cvisit(key_type const& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(k, f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + visit(K&& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + visit(K&& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + cvisit(K&& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE + size_t visit(FwdIterator first, FwdIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template + BOOST_FORCEINLINE + size_t visit(FwdIterator first, FwdIterator last, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template + BOOST_FORCEINLINE + size_t cvisit(FwdIterator first, FwdIterator last, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template size_type visit_all(F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.visit_all(f); + } + + template size_type visit_all(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_all(f); + } + + template size_type cvisit_all(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.cvisit_all(f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + void>::type + visit_all(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.visit_all(p, f); + } + + template + typename std::enable_if::value, + void>::type + visit_all(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.visit_all(p, f); + } + + template + typename std::enable_if::value, + void>::type + cvisit_all(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.cvisit_all(p, f); + } +#endif + + template bool visit_while(F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.visit_while(f); + } + + template bool visit_while(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_while(f); + } + + template bool cvisit_while(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.cvisit_while(f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + bool>::type + visit_while(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.visit_while(p, f); + } + + template + typename std::enable_if::value, + bool>::type + visit_while(ExecPolicy&& p, F f) 
const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.visit_while(p, f); + } + + template + typename std::enable_if::value, + bool>::type + cvisit_while(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.cvisit_while(p, f); + } +#endif + + /// Modifiers + /// + + template + BOOST_FORCEINLINE auto insert(Ty&& value) + -> decltype(table_.insert(std::forward(value))) + { + return table_.insert(std::forward(value)); + } + + BOOST_FORCEINLINE bool insert(init_type&& obj) + { + return table_.insert(std::move(obj)); + } + + template + void insert(InputIterator begin, InputIterator end) + { + for (auto pos = begin; pos != end; ++pos) { + table_.emplace(*pos); + } + } + + void insert(std::initializer_list ilist) + { + this->insert(ilist.begin(), ilist.end()); + } + + insert_return_type insert(node_type&& nh) + { + using access = detail::foa::node_handle_access; + + if (nh.empty()) { + return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert(std::move(access::element(nh)))) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template + BOOST_FORCEINLINE bool insert_or_assign(key_type const& k, M&& obj) + { + return table_.try_emplace_or_visit(k, std::forward(obj), + [&](value_type& m) { m.second = std::forward(obj); }); + } + + template + BOOST_FORCEINLINE bool insert_or_assign(key_type&& k, M&& obj) + { + return table_.try_emplace_or_visit(std::move(k), std::forward(obj), + [&](value_type& m) { m.second = std::forward(obj); }); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, bool>::type + insert_or_assign(K&& k, M&& obj) + { + return table_.try_emplace_or_visit(std::forward(k), + std::forward(obj), + [&](value_type& m) { m.second = std::forward(obj); }); + } + + template + BOOST_FORCEINLINE auto insert_or_visit(Ty&& value, F f) + -> decltype(table_.insert_or_visit(std::forward(value), f)) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.insert_or_visit(std::forward(value), f); + } + + template + BOOST_FORCEINLINE bool insert_or_visit(init_type&& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + return table_.insert_or_visit(std::move(obj), f); + } + + template + void insert_or_visit(InputIterator first, InputIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + for (; first != last; ++first) { + table_.emplace_or_visit(*first, f); + } + } + + template + void insert_or_visit(std::initializer_list ilist, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + this->insert_or_visit(ilist.begin(), ilist.end(), f); + } + + template + insert_return_type insert_or_visit(node_type&& nh, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) + using access = detail::foa::node_handle_access; + + if (nh.empty()) { + return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert_or_visit(std::move(access::element(nh)), f)) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template + BOOST_FORCEINLINE auto insert_or_cvisit(Ty&& value, F f) + -> decltype(table_.insert_or_cvisit(std::forward(value), f)) 
+ { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_cvisit(std::forward(value), f); + } + + template + BOOST_FORCEINLINE bool insert_or_cvisit(init_type&& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_cvisit(std::move(obj), f); + } + + template + void insert_or_cvisit(InputIterator first, InputIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + for (; first != last; ++first) { + table_.emplace_or_cvisit(*first, f); + } + } + + template + void insert_or_cvisit(std::initializer_list ilist, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + this->insert_or_cvisit(ilist.begin(), ilist.end(), f); + } + + template + insert_return_type insert_or_cvisit(node_type&& nh, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + using access = detail::foa::node_handle_access; + + if (nh.empty()) { + return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert_or_cvisit(std::move(access::element(nh)), f)) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template BOOST_FORCEINLINE bool emplace(Args&&... args) + { + return table_.emplace(std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool emplace_or_visit(Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args...) + return table_.emplace_or_visit( + std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool emplace_or_cvisit(Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.emplace_or_cvisit( + std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace(key_type const& k, Args&&... args) + { + return table_.try_emplace(k, std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace(key_type&& k, Args&&... args) + { + return table_.try_emplace(std::move(k), std::forward(args)...); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, bool>::type + try_emplace(K&& k, Args&&... args) + { + return table_.try_emplace( + std::forward(k), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_visit( + key_type const& k, Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_visit( + k, std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_cvisit( + key_type const& k, Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_cvisit( + k, std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_visit( + key_type&& k, Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_visit( + std::move(k), std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_cvisit( + key_type&& k, Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_cvisit( + std::move(k), std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_visit( + K&& k, Arg&& arg, Args&&... 
args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_visit(std::forward(k), + std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool try_emplace_or_cvisit( + K&& k, Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.try_emplace_or_cvisit(std::forward(k), + std::forward(arg), std::forward(args)...); + } + + BOOST_FORCEINLINE size_type erase(key_type const& k) + { + return table_.erase(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + erase(K&& k) + { + return table_.erase(std::forward(k)); + } + + template + BOOST_FORCEINLINE size_type erase_if(key_type const& k, F f) + { + return table_.erase_if(k, f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value && + !detail::is_execution_policy::value, + size_type>::type + erase_if(K&& k, F f) + { + return table_.erase_if(std::forward(k), f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + void>::type + erase_if(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.erase_if(p, f); + } +#endif + + template size_type erase_if(F f) { return table_.erase_if(f); } + + void swap(concurrent_node_map& other) noexcept( + boost::allocator_is_always_equal::type::value || + boost::allocator_propagate_on_container_swap::type::value) + { + return table_.swap(other.table_); + } + + node_type extract(key_type const& key) + { + node_type nh; + table_.extract(key, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + typename std::enable_if< + detail::are_transparent::value, node_type>::type + extract(K const& key) + { + node_type nh; + table_.extract(key, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + node_type extract_if(key_type const& key, F f) + { + node_type nh; + table_.extract_if(key, f, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + typename std::enable_if< + detail::are_transparent::value, node_type>::type + extract_if(K const& key, F f) + { + node_type nh; + table_.extract_if(key, f, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + void clear() noexcept { table_.clear(); } + + template + size_type merge(concurrent_node_map& x) + { + BOOST_ASSERT(get_allocator() == x.get_allocator()); + return table_.merge(x.table_); + } + + template + size_type merge(concurrent_node_map&& x) + { + return merge(x); + } + + BOOST_FORCEINLINE size_type count(key_type const& k) const + { + return table_.count(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + count(K const& k) + { + return table_.count(k); + } + + BOOST_FORCEINLINE bool contains(key_type const& k) const + { + return table_.contains(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, bool>::type + contains(K const& k) const + { + return table_.contains(k); + } + + /// Hash Policy + /// + size_type bucket_count() const noexcept { return table_.capacity(); } + + float load_factor() const noexcept { return table_.load_factor(); } + float max_load_factor() const noexcept + { + return table_.max_load_factor(); + } + void max_load_factor(float) {} + size_type max_load() const noexcept { return table_.max_load(); } + + void rehash(size_type n) { table_.rehash(n); } + void 
reserve(size_type n) { table_.reserve(n); } + +#if defined(BOOST_UNORDERED_ENABLE_STATS) + /// Stats + /// + stats get_stats() const { return table_.get_stats(); } + + void reset_stats() noexcept { table_.reset_stats(); } +#endif + + /// Observers + /// + allocator_type get_allocator() const noexcept + { + return table_.get_allocator(); + } + + hasher hash_function() const { return table_.hash_function(); } + key_equal key_eq() const { return table_.key_eq(); } + }; + + template + bool operator==( + concurrent_node_map const& lhs, + concurrent_node_map const& rhs) + { + return lhs.table_ == rhs.table_; + } + + template + bool operator!=( + concurrent_node_map const& lhs, + concurrent_node_map const& rhs) + { + return !(lhs == rhs); + } + + template + void swap(concurrent_node_map& x, + concurrent_node_map& y) + noexcept(noexcept(x.swap(y))) + { + x.swap(y); + } + + template + typename concurrent_node_map::size_type erase_if( + concurrent_node_map& c, Predicate pred) + { + return c.table_.erase_if(pred); + } + + template + void serialize( + Archive& ar, concurrent_node_map& c, unsigned int) + { + ar & core::make_nvp("table",c.table_); + } + +#if BOOST_UNORDERED_TEMPLATE_DEDUCTION_GUIDES + + template >, + class Pred = + std::equal_to >, + class Allocator = std::allocator< + boost::unordered::detail::iter_to_alloc_t >, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_map(InputIterator, InputIterator, + std::size_t = boost::unordered::detail::foa::default_bucket_count, + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_map< + boost::unordered::detail::iter_key_t, + boost::unordered::detail::iter_val_t, Hash, Pred, + Allocator>; + + template >, + class Pred = std::equal_to >, + class Allocator = std::allocator >, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_map(std::initializer_list >, + std::size_t = boost::unordered::detail::foa::default_bucket_count, + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_map, T, Hash, Pred, + Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_map(InputIterator, InputIterator, std::size_t, Allocator) + -> concurrent_node_map< + boost::unordered::detail::iter_key_t, + boost::unordered::detail::iter_val_t, + boost::hash >, + std::equal_to >, + Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_map(InputIterator, InputIterator, Allocator) + -> concurrent_node_map< + boost::unordered::detail::iter_key_t, + boost::unordered::detail::iter_val_t, + boost::hash >, + std::equal_to >, + Allocator>; + + template >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_map( + InputIterator, InputIterator, std::size_t, Hash, Allocator) + -> concurrent_node_map< + boost::unordered::detail::iter_key_t, + boost::unordered::detail::iter_val_t, Hash, + std::equal_to >, + Allocator>; + + template > > + concurrent_node_map(std::initializer_list >, std::size_t, + Allocator) -> concurrent_node_map, T, + boost::hash >, + std::equal_to >, Allocator>; + + template > > + concurrent_node_map(std::initializer_list >, Allocator) + -> concurrent_node_map, T, + boost::hash >, + std::equal_to >, Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_map(std::initializer_list >, std::size_t, + Hash, Allocator) -> concurrent_node_map, T, + Hash, std::equal_to >, Allocator>; + +#endif + + } // 
namespace unordered +} // namespace boost + +#endif // BOOST_UNORDERED_CONCURRENT_NODE_MAP_HPP diff --git a/include/boost/unordered/concurrent_node_map_fwd.hpp b/include/boost/unordered/concurrent_node_map_fwd.hpp new file mode 100644 index 00000000..dd1f6fc0 --- /dev/null +++ b/include/boost/unordered/concurrent_node_map_fwd.hpp @@ -0,0 +1,67 @@ +/* Fast open-addressing, node-based concurrent hashmap. + * + * Copyright 2023 Christian Mazakas. + * Copyright 2024 Braden Ganetsky. + * Copyright 2024 Joaquin M Lopez Munoz. + * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. + */ + +#ifndef BOOST_UNORDERED_CONCURRENT_NODE_MAP_FWD_HPP +#define BOOST_UNORDERED_CONCURRENT_NODE_MAP_FWD_HPP + +#include +#include + +#include +#include + +#ifndef BOOST_NO_CXX17_HDR_MEMORY_RESOURCE +#include +#endif + +namespace boost { + namespace unordered { + + template , + class Pred = std::equal_to, + class Allocator = std::allocator > > + class concurrent_node_map; + + template + bool operator==( + concurrent_node_map const& lhs, + concurrent_node_map const& rhs); + + template + bool operator!=( + concurrent_node_map const& lhs, + concurrent_node_map const& rhs); + + template + void swap(concurrent_node_map& x, + concurrent_node_map& y) + noexcept(noexcept(x.swap(y))); + + template + typename concurrent_node_map::size_type erase_if( + concurrent_node_map& c, Predicate pred); + +#ifndef BOOST_NO_CXX17_HDR_MEMORY_RESOURCE + namespace pmr { + template , + class Pred = std::equal_to > + using concurrent_node_map = boost::unordered::concurrent_node_map > >; + } // namespace pmr +#endif + + } // namespace unordered + + using boost::unordered::concurrent_node_map; +} // namespace boost + +#endif // BOOST_UNORDERED_CONCURRENT_NODE_MAP_FWD_HPP diff --git a/include/boost/unordered/concurrent_node_set.hpp b/include/boost/unordered/concurrent_node_set.hpp new file mode 100644 index 00000000..81782a11 --- /dev/null +++ b/include/boost/unordered/concurrent_node_set.hpp @@ -0,0 +1,888 @@ +/* Fast open-addressing, node-based concurrent hashset. + * + * Copyright 2023 Christian Mazakas. + * Copyright 2023-2024 Joaquin M Lopez Munoz. + * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. 
+ */ + +#ifndef BOOST_UNORDERED_CONCURRENT_NODE_SET_HPP +#define BOOST_UNORDERED_CONCURRENT_NODE_SET_HPP + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include + +#include + +namespace boost { + namespace unordered { + template + class concurrent_node_set + { + private: + template + friend class concurrent_node_set; + template + friend class unordered_node_set; + + using type_policy = detail::foa::node_set_types::type>; + + using table_type = + detail::foa::concurrent_table; + + table_type table_; + + template + bool friend operator==(concurrent_node_set const& lhs, + concurrent_node_set const& rhs); + + template + friend typename concurrent_node_set::size_type erase_if( + concurrent_node_set& set, Predicate pred); + + template + friend void serialize( + Archive& ar, concurrent_node_set& c, + unsigned int version); + + public: + using key_type = Key; + using value_type = typename type_policy::value_type; + using init_type = typename type_policy::init_type; + using size_type = std::size_t; + using difference_type = std::ptrdiff_t; + using hasher = typename boost::unordered::detail::type_identity::type; + using key_equal = typename boost::unordered::detail::type_identity::type; + using allocator_type = typename boost::unordered::detail::type_identity::type; + using reference = value_type&; + using const_reference = value_type const&; + using pointer = typename boost::allocator_pointer::type; + using const_pointer = + typename boost::allocator_const_pointer::type; + using node_type = detail::foa::node_set_handle::type>; + using insert_return_type = + detail::foa::iteratorless_insert_return_type; + static constexpr size_type bulk_visit_size = table_type::bulk_visit_size; + +#if defined(BOOST_UNORDERED_ENABLE_STATS) + using stats = typename table_type::stats; +#endif + + concurrent_node_set() + : concurrent_node_set(detail::foa::default_bucket_count) + { + } + + explicit concurrent_node_set(size_type n, const hasher& hf = hasher(), + const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : table_(n, hf, eql, a) + { + } + + template + concurrent_node_set(InputIterator f, InputIterator l, + size_type n = detail::foa::default_bucket_count, + const hasher& hf = hasher(), const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : table_(n, hf, eql, a) + { + this->insert(f, l); + } + + concurrent_node_set(concurrent_node_set const& rhs) + : table_(rhs.table_, + boost::allocator_select_on_container_copy_construction( + rhs.get_allocator())) + { + } + + concurrent_node_set(concurrent_node_set&& rhs) + : table_(std::move(rhs.table_)) + { + } + + template + concurrent_node_set( + InputIterator f, InputIterator l, allocator_type const& a) + : concurrent_node_set(f, l, 0, hasher(), key_equal(), a) + { + } + + explicit concurrent_node_set(allocator_type const& a) + : table_(detail::foa::default_bucket_count, hasher(), key_equal(), a) + { + } + + concurrent_node_set( + concurrent_node_set const& rhs, allocator_type const& a) + : table_(rhs.table_, a) + { + } + + concurrent_node_set(concurrent_node_set&& rhs, allocator_type const& a) + : table_(std::move(rhs.table_), a) + { + } + + concurrent_node_set(std::initializer_list il, + size_type n = detail::foa::default_bucket_count, + const hasher& hf = hasher(), const key_equal& eql = key_equal(), + const allocator_type& a = allocator_type()) + : concurrent_node_set(n, hf, eql, a) + { + this->insert(il.begin(), il.end()); + } + + 
concurrent_node_set(size_type n, const allocator_type& a) + : concurrent_node_set(n, hasher(), key_equal(), a) + { + } + + concurrent_node_set( + size_type n, const hasher& hf, const allocator_type& a) + : concurrent_node_set(n, hf, key_equal(), a) + { + } + + template + concurrent_node_set( + InputIterator f, InputIterator l, size_type n, const allocator_type& a) + : concurrent_node_set(f, l, n, hasher(), key_equal(), a) + { + } + + template + concurrent_node_set(InputIterator f, InputIterator l, size_type n, + const hasher& hf, const allocator_type& a) + : concurrent_node_set(f, l, n, hf, key_equal(), a) + { + } + + concurrent_node_set( + std::initializer_list il, const allocator_type& a) + : concurrent_node_set( + il, detail::foa::default_bucket_count, hasher(), key_equal(), a) + { + } + + concurrent_node_set(std::initializer_list il, size_type n, + const allocator_type& a) + : concurrent_node_set(il, n, hasher(), key_equal(), a) + { + } + + concurrent_node_set(std::initializer_list il, size_type n, + const hasher& hf, const allocator_type& a) + : concurrent_node_set(il, n, hf, key_equal(), a) + { + } + + concurrent_node_set( + unordered_node_set&& other) + : table_(std::move(other.table_)) + { + } + + ~concurrent_node_set() = default; + + concurrent_node_set& operator=(concurrent_node_set const& rhs) + { + table_ = rhs.table_; + return *this; + } + + concurrent_node_set& operator=(concurrent_node_set&& rhs) + noexcept(boost::allocator_is_always_equal::type::value || + boost::allocator_propagate_on_container_move_assignment< + Allocator>::type::value) + { + table_ = std::move(rhs.table_); + return *this; + } + + concurrent_node_set& operator=(std::initializer_list ilist) + { + table_ = ilist; + return *this; + } + + /// Capacity + /// + + size_type size() const noexcept { return table_.size(); } + size_type max_size() const noexcept { return table_.max_size(); } + + BOOST_ATTRIBUTE_NODISCARD bool empty() const noexcept + { + return size() == 0; + } + + template + BOOST_FORCEINLINE size_type visit(key_type const& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(k, f); + } + + template + BOOST_FORCEINLINE size_type visit(key_type const& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(k, f); + } + + template + BOOST_FORCEINLINE size_type cvisit(key_type const& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(k, f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + visit(K&& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + visit(K&& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + cvisit(K&& k, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(std::forward(k), f); + } + + template + BOOST_FORCEINLINE + size_t visit(FwdIterator first, FwdIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template + BOOST_FORCEINLINE + size_t visit(FwdIterator first, FwdIterator last, F f) const + { + 
BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template + BOOST_FORCEINLINE + size_t cvisit(FwdIterator first, FwdIterator last, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_BULK_VISIT_ITERATOR(FwdIterator) + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit(first, last, f); + } + + template size_type visit_all(F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_all(f); + } + + template size_type visit_all(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_all(f); + } + + template size_type cvisit_all(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.cvisit_all(f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + void>::type + visit_all(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.visit_all(p, f); + } + + template + typename std::enable_if::value, + void>::type + visit_all(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.visit_all(p, f); + } + + template + typename std::enable_if::value, + void>::type + cvisit_all(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.cvisit_all(p, f); + } +#endif + + template bool visit_while(F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_while(f); + } + + template bool visit_while(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.visit_while(f); + } + + template bool cvisit_while(F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.cvisit_while(f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + bool>::type + visit_while(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.visit_while(p, f); + } + + template + typename std::enable_if::value, + bool>::type + visit_while(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.visit_while(p, f); + } + + template + typename std::enable_if::value, + bool>::type + cvisit_while(ExecPolicy&& p, F f) const + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + return table_.cvisit_while(p, f); + } +#endif + + /// Modifiers + /// + + BOOST_FORCEINLINE bool insert(value_type const& obj) + { + return table_.insert(obj); + } + + BOOST_FORCEINLINE bool insert(value_type&& obj) + { + return table_.insert(std::move(obj)); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, + bool >::type + insert(K&& k) + { + return table_.try_emplace(std::forward(k)); + } + + template + void insert(InputIterator begin, InputIterator end) + { + for (auto pos = begin; pos != end; ++pos) { + table_.emplace(*pos); + } + } + + void insert(std::initializer_list ilist) + { + this->insert(ilist.begin(), ilist.end()); + } + + insert_return_type insert(node_type&& nh) + { + using access = detail::foa::node_handle_access; + + if (nh.empty()) { 
+ return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert(std::move(access::element(nh)))) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template + BOOST_FORCEINLINE bool insert_or_visit(value_type const& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_visit(obj, f); + } + + template + BOOST_FORCEINLINE bool insert_or_visit(value_type&& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_visit(std::move(obj), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, + bool >::type + insert_or_visit(K&& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.try_emplace_or_visit(std::forward(k), f); + } + + template + void insert_or_visit(InputIterator first, InputIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + for (; first != last; ++first) { + table_.emplace_or_visit(*first, f); + } + } + + template + void insert_or_visit(std::initializer_list ilist, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + this->insert_or_visit(ilist.begin(), ilist.end(), f); + } + + template + insert_return_type insert_or_visit(node_type&& nh, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + using access = detail::foa::node_handle_access; + + if (nh.empty()) { + return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert_or_visit(std::move(access::element(nh)), f)) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template + BOOST_FORCEINLINE bool insert_or_cvisit(value_type const& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_cvisit(obj, f); + } + + template + BOOST_FORCEINLINE bool insert_or_cvisit(value_type&& obj, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.insert_or_cvisit(std::move(obj), f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, + bool >::type + insert_or_cvisit(K&& k, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + return table_.try_emplace_or_cvisit(std::forward(k), f); + } + + template + void insert_or_cvisit(InputIterator first, InputIterator last, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + for (; first != last; ++first) { + table_.emplace_or_cvisit(*first, f); + } + } + + template + void insert_or_cvisit(std::initializer_list ilist, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + this->insert_or_cvisit(ilist.begin(), ilist.end(), f); + } + + template + insert_return_type insert_or_cvisit(node_type&& nh, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) + using access = detail::foa::node_handle_access; + + if (nh.empty()) { + return {false, node_type{}}; + } + + // Caveat: get_allocator() incurs synchronization (not cheap) + BOOST_ASSERT(get_allocator() == nh.get_allocator()); + + if (table_.insert_or_cvisit(std::move(access::element(nh)), f)) { + access::reset(nh); + return {true, node_type{}}; + } else { + return {false, std::move(nh)}; + } + } + + template BOOST_FORCEINLINE bool emplace(Args&&... 
args) + { + return table_.emplace(std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool emplace_or_visit(Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.emplace_or_visit( + std::forward(arg), std::forward(args)...); + } + + template + BOOST_FORCEINLINE bool emplace_or_cvisit(Arg&& arg, Args&&... args) + { + BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...) + return table_.emplace_or_cvisit( + std::forward(arg), std::forward(args)...); + } + + BOOST_FORCEINLINE size_type erase(key_type const& k) + { + return table_.erase(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + erase(K&& k) + { + return table_.erase(std::forward(k)); + } + + template + BOOST_FORCEINLINE size_type erase_if(key_type const& k, F f) + { + return table_.erase_if(k, f); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value && + !detail::is_execution_policy::value, + size_type>::type + erase_if(K&& k, F f) + { + return table_.erase_if(std::forward(k), f); + } + +#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) + template + typename std::enable_if::value, + void>::type + erase_if(ExecPolicy&& p, F f) + { + BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy) + table_.erase_if(p, f); + } +#endif + + template size_type erase_if(F f) { return table_.erase_if(f); } + + void swap(concurrent_node_set& other) noexcept( + boost::allocator_is_always_equal::type::value || + boost::allocator_propagate_on_container_swap::type::value) + { + return table_.swap(other.table_); + } + + node_type extract(key_type const& key) + { + node_type nh; + table_.extract(key, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + typename std::enable_if< + detail::are_transparent::value, node_type>::type + extract(K const& key) + { + node_type nh; + table_.extract(key, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + node_type extract_if(key_type const& key, F f) + { + node_type nh; + table_.extract_if(key, f, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + template + typename std::enable_if< + detail::are_transparent::value, node_type>::type + extract_if(K const& key, F f) + { + node_type nh; + table_.extract_if(key, f, detail::foa::node_handle_emplacer(nh)); + return nh; + } + + void clear() noexcept { table_.clear(); } + + template + size_type merge(concurrent_node_set& x) + { + BOOST_ASSERT(get_allocator() == x.get_allocator()); + return table_.merge(x.table_); + } + + template + size_type merge(concurrent_node_set&& x) + { + return merge(x); + } + + BOOST_FORCEINLINE size_type count(key_type const& k) const + { + return table_.count(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, size_type>::type + count(K const& k) + { + return table_.count(k); + } + + BOOST_FORCEINLINE bool contains(key_type const& k) const + { + return table_.contains(k); + } + + template + BOOST_FORCEINLINE typename std::enable_if< + detail::are_transparent::value, bool>::type + contains(K const& k) const + { + return table_.contains(k); + } + + /// Hash Policy + /// + size_type bucket_count() const noexcept { return table_.capacity(); } + + float load_factor() const noexcept { return table_.load_factor(); } + float max_load_factor() const noexcept + { + return table_.max_load_factor(); + } + void max_load_factor(float) {} + size_type max_load() const noexcept { 
return table_.max_load(); } + + void rehash(size_type n) { table_.rehash(n); } + void reserve(size_type n) { table_.reserve(n); } + +#if defined(BOOST_UNORDERED_ENABLE_STATS) + /// Stats + /// + stats get_stats() const { return table_.get_stats(); } + + void reset_stats() noexcept { table_.reset_stats(); } +#endif + + /// Observers + /// + allocator_type get_allocator() const noexcept + { + return table_.get_allocator(); + } + + hasher hash_function() const { return table_.hash_function(); } + key_equal key_eq() const { return table_.key_eq(); } + }; + + template + bool operator==( + concurrent_node_set const& lhs, + concurrent_node_set const& rhs) + { + return lhs.table_ == rhs.table_; + } + + template + bool operator!=( + concurrent_node_set const& lhs, + concurrent_node_set const& rhs) + { + return !(lhs == rhs); + } + + template + void swap(concurrent_node_set& x, + concurrent_node_set& y) + noexcept(noexcept(x.swap(y))) + { + x.swap(y); + } + + template + typename concurrent_node_set::size_type erase_if( + concurrent_node_set& c, Predicate pred) + { + return c.table_.erase_if(pred); + } + + template + void serialize( + Archive& ar, concurrent_node_set& c, unsigned int) + { + ar & core::make_nvp("table",c.table_); + } + +#if BOOST_UNORDERED_TEMPLATE_DEDUCTION_GUIDES + + template ::value_type>, + class Pred = + std::equal_to::value_type>, + class Allocator = std::allocator< + typename std::iterator_traits::value_type>, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_set(InputIterator, InputIterator, + std::size_t = boost::unordered::detail::foa::default_bucket_count, + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_set< + typename std::iterator_traits::value_type, Hash, Pred, + Allocator>; + + template , + class Pred = std::equal_to, class Allocator = std::allocator, + class = std::enable_if_t >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_set(std::initializer_list, + std::size_t = boost::unordered::detail::foa::default_bucket_count, + Hash = Hash(), Pred = Pred(), Allocator = Allocator()) + -> concurrent_node_set< T, Hash, Pred, Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_set(InputIterator, InputIterator, std::size_t, Allocator) + -> concurrent_node_set< + typename std::iterator_traits::value_type, + boost::hash::value_type>, + std::equal_to::value_type>, + Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_set(InputIterator, InputIterator, Allocator) + -> concurrent_node_set< + typename std::iterator_traits::value_type, + boost::hash::value_type>, + std::equal_to::value_type>, + Allocator>; + + template >, + class = std::enable_if_t >, + class = std::enable_if_t > > + concurrent_node_set( + InputIterator, InputIterator, std::size_t, Hash, Allocator) + -> concurrent_node_set< + typename std::iterator_traits::value_type, Hash, + std::equal_to::value_type>, + Allocator>; + + template > > + concurrent_node_set(std::initializer_list, std::size_t, Allocator) + -> concurrent_node_set,std::equal_to, Allocator>; + + template > > + concurrent_node_set(std::initializer_list, Allocator) + -> concurrent_node_set, std::equal_to, Allocator>; + + template >, + class = std::enable_if_t > > + concurrent_node_set(std::initializer_list, std::size_t,Hash, Allocator) + -> concurrent_node_set, Allocator>; + +#endif + + } // namespace unordered +} // namespace boost + +#endif // 
BOOST_UNORDERED_CONCURRENT_NODE_SET_HPP diff --git a/include/boost/unordered/concurrent_node_set_fwd.hpp b/include/boost/unordered/concurrent_node_set_fwd.hpp new file mode 100644 index 00000000..62e06614 --- /dev/null +++ b/include/boost/unordered/concurrent_node_set_fwd.hpp @@ -0,0 +1,67 @@ +/* Fast open-addressing, node-based concurrent hashset. + * + * Copyright 2023 Christian Mazakas. + * Copyright 2023-2024 Joaquin M Lopez Munoz. + * Copyright 2024 Braden Ganetsky. + * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. + */ + +#ifndef BOOST_UNORDERED_CONCURRENT_NODE_SET_FWD_HPP +#define BOOST_UNORDERED_CONCURRENT_NODE_SET_FWD_HPP + +#include +#include + +#include +#include + +#ifndef BOOST_NO_CXX17_HDR_MEMORY_RESOURCE +#include +#endif + +namespace boost { + namespace unordered { + + template , + class Pred = std::equal_to, + class Allocator = std::allocator > + class concurrent_node_set; + + template + bool operator==( + concurrent_node_set const& lhs, + concurrent_node_set const& rhs); + + template + bool operator!=( + concurrent_node_set const& lhs, + concurrent_node_set const& rhs); + + template + void swap(concurrent_node_set& x, + concurrent_node_set& y) + noexcept(noexcept(x.swap(y))); + + template + typename concurrent_node_set::size_type erase_if( + concurrent_node_set& c, Predicate pred); + +#ifndef BOOST_NO_CXX17_HDR_MEMORY_RESOURCE + namespace pmr { + template , + class Pred = std::equal_to > + using concurrent_node_set = boost::unordered::concurrent_node_set >; + } // namespace pmr +#endif + + } // namespace unordered + + using boost::unordered::concurrent_node_set; +} // namespace boost + +#endif // BOOST_UNORDERED_CONCURRENT_NODE_SET_FWD_HPP diff --git a/include/boost/unordered/detail/foa/concurrent_table.hpp b/include/boost/unordered/detail/foa/concurrent_table.hpp index 66c25791..43cfceef 100644 --- a/include/boost/unordered/detail/foa/concurrent_table.hpp +++ b/include/boost/unordered/detail/foa/concurrent_table.hpp @@ -397,10 +397,10 @@ inline void swap(atomic_size_control& x,atomic_size_control& y) * - Parallel versions of [c]visit_all(f) and erase_if(f) are provided based * on C++17 stdlib parallel algorithms. * - * Consult boost::concurrent_flat_(map|set) docs for the full API reference. - * Heterogeneous lookup is suported by default, that is, without checking for - * any ::is_transparent typedefs --this checking is done by the wrapping - * containers. + * Consult boost::concurrent_(flat|node)_(map|set) docs for the full API + * reference. Heterogeneous lookup is suported by default, that is, without + * checking for any ::is_transparent typedefs --this checking is done by the + * wrapping containers. * * Thread-safe concurrency is implemented using a two-level lock system: * @@ -724,6 +724,14 @@ public: BOOST_FORCEINLINE bool insert(value_type&& x){return emplace_impl(std::move(x));} + template + BOOST_FORCEINLINE + typename std::enable_if< + !std::is_same::value, + bool + >::type + insert(element_type&& x){return emplace_impl(std::move(x));} + template BOOST_FORCEINLINE bool try_emplace(Key&& x,Args&&... 
args) { @@ -819,6 +827,30 @@ public: group_shared{},std::forward(f),std::move(x)); } + template + BOOST_FORCEINLINE + typename std::enable_if< + !std::is_same::value, + bool + >::type + insert_or_visit(element_type&& x, F&& f) + { + return emplace_or_visit_impl( + group_exclusive{},std::forward(f),std::move(x)); + } + + template + BOOST_FORCEINLINE + typename std::enable_if< + !std::is_same::value, + bool + >::type + insert_or_cvisit(element_type&& x, F&& f) + { + return emplace_or_visit_impl( + group_shared{},std::forward(f),std::move(x)); + } + template BOOST_FORCEINLINE std::size_t erase(const Key& x) { @@ -889,6 +921,29 @@ public: super::clear(); } + template + BOOST_FORCEINLINE void extract(const Key& x,Extractor&& ext) + { + extract_if( + x,[](const value_type&){return true;},std::forward(ext)); + } + + template + BOOST_FORCEINLINE void extract_if(const Key& x,F&& f,Extractor&& ext) + { + auto lck=shared_access(); + auto hash=this->hash_for(x); + unprotected_internal_visit( + group_exclusive{},x,this->position_for(hash),hash, + [&,this](group_type* pg,unsigned int n,element_type* p) + { + if(f(cast_for(group_exclusive{},type_policy::value_from(*p)))){ + ext(std::move(*p),this->al()); + super::erase(pg,n,p); + } + }); + } + // TODO: should we accept different allocator too? template size_type merge(concurrent_table& x) @@ -1733,7 +1788,8 @@ private: if(this->find(x,pos0,hash))throw_exception(bad_archive_exception()); auto loc=this->unchecked_emplace_at(pos0,hash,std::move(x)); - ar.reset_object_address(std::addressof(*loc.p),std::addressof(x)); + ar.reset_object_address( + std::addressof(type_policy::value_from(*loc.p)),std::addressof(x)); } } @@ -1742,7 +1798,7 @@ private: { using raw_key_type=typename std::remove_const::type; using raw_mapped_type=typename std::remove_const< - typename TypePolicy::mapped_type>::type; + typename type_policy::mapped_type>::type; auto lck=exclusive_access(); std::size_t s; @@ -1766,8 +1822,12 @@ private: if(this->find(k,pos0,hash))throw_exception(bad_archive_exception()); auto loc=this->unchecked_emplace_at(pos0,hash,std::move(k),std::move(m)); - ar.reset_object_address(std::addressof(loc.p->first),std::addressof(k)); - ar.reset_object_address(std::addressof(loc.p->second),std::addressof(m)); + ar.reset_object_address( + std::addressof(type_policy::value_from(*loc.p).first), + std::addressof(k)); + ar.reset_object_address( + std::addressof(type_policy::value_from(*loc.p).second), + std::addressof(m)); } } diff --git a/include/boost/unordered/detail/foa/core.hpp b/include/boost/unordered/detail/foa/core.hpp index b4b91e1d..062b7112 100644 --- a/include/boost/unordered/detail/foa/core.hpp +++ b/include/boost/unordered/detail/foa/core.hpp @@ -1459,6 +1459,11 @@ public: using stats=table_core_stats; #endif +#if defined(BOOST_GCC) +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wmaybe-uninitialized" +#endif + table_core( std::size_t n=default_bucket_count,const Hash& h_=Hash(), const Pred& pred_=Pred(),const Allocator& al_=Allocator()): @@ -1467,6 +1472,10 @@ public: size_ctrl{initial_max_load(),0} {} +#if defined(BOOST_GCC) +#pragma GCC diagnostic pop +#endif + /* genericize on an ArraysFn so that we can do things like delay an * allocation for the group_access data required by cfoa after the move * constructors of Hash, Pred have been invoked @@ -2081,6 +2090,11 @@ private: using pred_base=empty_value; using allocator_base=empty_value; +#if defined(BOOST_GCC) +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored 
"-Wmaybe-uninitialized" +#endif + /* used by allocator-extended move ctor */ table_core(Hash&& h_,Pred&& pred_,const Allocator& al_): @@ -2091,6 +2105,10 @@ private: { } +#if defined(BOOST_GCC) +#pragma GCC diagnostic pop +#endif + arrays_type new_arrays(std::size_t n)const { return arrays_type::new_(typename arrays_type::allocator_type(al()),n); diff --git a/include/boost/unordered/detail/foa/node_handle.hpp b/include/boost/unordered/detail/foa/node_handle.hpp index ae1600d8..5af22798 100644 --- a/include/boost/unordered/detail/foa/node_handle.hpp +++ b/include/boost/unordered/detail/foa/node_handle.hpp @@ -1,4 +1,5 @@ /* Copyright 2023 Christian Mazakas. + * Copyright 2024 Joaquin M Lopez Munoz. * Distributed under the Boost Software License, Version 1.0. * (See accompanying file LICENSE_1_0.txt or copy at * http://www.boost.org/LICENSE_1_0.txt) @@ -11,8 +12,11 @@ #include +#include #include +#include #include +#include namespace boost{ namespace unordered{ @@ -27,6 +31,13 @@ struct insert_return_type NodeType node; }; +template +struct iteratorless_insert_return_type +{ + bool inserted; + NodeType node; +}; + template struct node_handle_base { @@ -42,7 +53,27 @@ struct node_handle_base element_type p_; BOOST_ATTRIBUTE_NO_UNIQUE_ADDRESS opt_storage a_; - protected: + friend struct node_handle_access; + + template + void move_assign_allocator_if(node_handle_base&& nh)noexcept + { + move_assign_allocator_if( + std::integral_constant{}, std::move(nh)); + } + + void move_assign_allocator_if( + std::true_type, node_handle_base&& nh)noexcept + { + al()=std::move(nh.al()); + } + + void move_assign_allocator_if( + std::false_type, node_handle_base&&)noexcept + { + } + +protected: node_value_type& data()noexcept { return *(p_.p); @@ -126,9 +157,7 @@ struct node_handle_base BOOST_ASSERT(pocma||al()==nh.al()); type_policy::destroy(al(),&p_); - if(pocma){ - al()=std::move(nh.al()); - } + move_assign_allocator_if(std::move(nh)); p_=std::move(nh.p_); nh.reset(); @@ -153,7 +182,17 @@ struct node_handle_base } } - allocator_type get_allocator()const noexcept{return al();} + allocator_type get_allocator()const + { +#if defined(BOOST_GCC) + /* GCC lifetime analysis incorrectly warns about uninitialized + * allocator object under some circumstances. 
+ */ + if(empty())__builtin_unreachable(); +#endif + return al(); + } + explicit operator bool()const noexcept{ return !empty();} BOOST_ATTRIBUTE_NODISCARD bool empty()const noexcept{return p_.p==nullptr;} @@ -196,6 +235,82 @@ struct node_handle_base } }; +// Internal usage of node_handle_base protected API + +struct node_handle_access +{ + template + using node_type = node_handle_base; + +#if BOOST_WORKAROUND(BOOST_CLANG_VERSION,<190000) + // https://github.com/llvm/llvm-project/issues/25708 + + template + struct element_type_impl + { + using type = typename node_type::element_type; + }; + template + using element_type = typename element_type_impl::type; +#else + template + using element_type = typename node_type::element_type; +#endif + + template + static element_type& + element(node_type& nh)noexcept + { + return nh.element(); + } + + template + static element_type + const& element(node_type const& nh)noexcept + { + return nh.element(); + } + + template + static void emplace( + node_type& nh, + element_type&& x, Allocator a) + { + nh.emplace(std::move(x), a); + } + + template + static void reset(node_type& nh) + { + nh.reset(); + } +}; + +template +class node_handle_emplacer_class +{ + using access = node_handle_access; + using node_type = access::node_type; + using element_type = access::element_type; + + node_type & nh; + +public: + node_handle_emplacer_class(node_type& nh_): nh(nh_) {} + + void operator()(element_type&& x,Allocator a) + { + access::emplace(nh, std::move(x), a); + } +}; + +template +node_handle_emplacer_class +node_handle_emplacer(node_handle_base& nh) +{ + return {nh}; +} + } } } diff --git a/include/boost/unordered/detail/foa/node_map_handle.hpp b/include/boost/unordered/detail/foa/node_map_handle.hpp new file mode 100644 index 00000000..8df92278 --- /dev/null +++ b/include/boost/unordered/detail/foa/node_map_handle.hpp @@ -0,0 +1,56 @@ +/* Copyright 2023 Christian Mazakas. + * Copyright 2024 Joaquin M Lopez Munoz. + * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. + */ + +#ifndef BOOST_UNORDERED_DETAIL_FOA_NODE_MAP_HANDLE_HPP +#define BOOST_UNORDERED_DETAIL_FOA_NODE_MAP_HANDLE_HPP + +#include + +namespace boost{ +namespace unordered{ +namespace detail{ +namespace foa{ + +template +struct node_map_handle + : public node_handle_base +{ +private: + using base_type = node_handle_base; + + using typename base_type::type_policy; + +public: + using key_type = typename TypePolicy::key_type; + using mapped_type = typename TypePolicy::mapped_type; + + constexpr node_map_handle() noexcept = default; + node_map_handle(node_map_handle&& nh) noexcept = default; + + node_map_handle& operator=(node_map_handle&&) noexcept = default; + + key_type& key() const + { + BOOST_ASSERT(!this->empty()); + return const_cast(this->data().first); + } + + mapped_type& mapped() const + { + BOOST_ASSERT(!this->empty()); + return const_cast(this->data().second); + } +}; + +} +} +} +} + +#endif // BOOST_UNORDERED_DETAIL_FOA_NODE_MAP_HANDLE_HPP diff --git a/include/boost/unordered/detail/foa/node_set_handle.hpp b/include/boost/unordered/detail/foa/node_set_handle.hpp new file mode 100644 index 00000000..c3bc065b --- /dev/null +++ b/include/boost/unordered/detail/foa/node_set_handle.hpp @@ -0,0 +1,48 @@ +/* Copyright 2023 Christian Mazakas. + * Copyright 2024 Joaquin M Lopez Munoz. 
+ * Distributed under the Boost Software License, Version 1.0. + * (See accompanying file LICENSE_1_0.txt or copy at + * http://www.boost.org/LICENSE_1_0.txt) + * + * See https://www.boost.org/libs/unordered for library home page. + */ + +#ifndef BOOST_UNORDERED_DETAIL_FOA_NODE_SET_HANDLE_HPP +#define BOOST_UNORDERED_DETAIL_FOA_NODE_SET_HANDLE_HPP + +#include + +namespace boost{ +namespace unordered{ +namespace detail{ +namespace foa{ + +template +struct node_set_handle + : public detail::foa::node_handle_base +{ +private: + using base_type = detail::foa::node_handle_base; + + using typename base_type::type_policy; + +public: + using value_type = typename TypePolicy::value_type; + + constexpr node_set_handle() noexcept = default; + node_set_handle(node_set_handle&& nh) noexcept = default; + node_set_handle& operator=(node_set_handle&&) noexcept = default; + + value_type& value() const + { + BOOST_ASSERT(!this->empty()); + return const_cast(this->data()); + } +}; + +} +} +} +} + +#endif // BOOST_UNORDERED_DETAIL_FOA_NODE_SET_HANDLE_HPP diff --git a/include/boost/unordered/unordered_node_map.hpp b/include/boost/unordered/unordered_node_map.hpp index def251fa..56be52bc 100644 --- a/include/boost/unordered/unordered_node_map.hpp +++ b/include/boost/unordered/unordered_node_map.hpp @@ -1,4 +1,5 @@ // Copyright (C) 2022-2023 Christian Mazakas +// Copyright (C) 2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -10,8 +11,8 @@ #pragma once #endif -#include -#include +#include +#include #include #include #include @@ -36,45 +37,13 @@ namespace boost { #pragma warning(disable : 4714) /* marked as __forceinline not inlined */ #endif - namespace detail { - template - struct node_map_handle - : public detail::foa::node_handle_base - { - private: - using base_type = detail::foa::node_handle_base; - - using typename base_type::type_policy; - - template - friend class boost::unordered::unordered_node_map; - - public: - using key_type = typename TypePolicy::key_type; - using mapped_type = typename TypePolicy::mapped_type; - - constexpr node_map_handle() noexcept = default; - node_map_handle(node_map_handle&& nh) noexcept = default; - - node_map_handle& operator=(node_map_handle&&) noexcept = default; - - key_type& key() const - { - BOOST_ASSERT(!this->empty()); - return const_cast(this->data().first); - } - - mapped_type& mapped() const - { - BOOST_ASSERT(!this->empty()); - return const_cast(this->data().second); - } - }; - } // namespace detail - template class unordered_node_map { + template + friend class concurrent_node_map; + using map_types = detail::foa::node_map_types::type>; @@ -109,7 +78,7 @@ namespace boost { typename boost::allocator_const_pointer::type; using iterator = typename table_type::iterator; using const_iterator = typename table_type::const_iterator; - using node_type = detail::node_map_handle::type>; using insert_return_type = @@ -220,6 +189,12 @@ namespace boost { { } + unordered_node_map( + concurrent_node_map&& other) + : table_(std::move(other.table_)) + { + } + ~unordered_node_map() = default; unordered_node_map& operator=(unordered_node_map const& other) @@ -307,15 +282,17 @@ namespace boost { insert_return_type insert(node_type&& nh) { + using access = detail::foa::node_handle_access; + if (nh.empty()) { return {end(), false, node_type{}}; } BOOST_ASSERT(get_allocator() == nh.get_allocator()); - auto itp = table_.insert(std::move(nh.element())); + 
auto itp = table_.insert(std::move(access::element(nh))); if (itp.second) { - nh.reset(); + access::reset(nh); return {itp.first, true, node_type{}}; } else { return {itp.first, false, std::move(nh)}; @@ -324,15 +301,17 @@ namespace boost { iterator insert(const_iterator, node_type&& nh) { + using access = detail::foa::node_handle_access; + if (nh.empty()) { return end(); } BOOST_ASSERT(get_allocator() == nh.get_allocator()); - auto itp = table_.insert(std::move(nh.element())); + auto itp = table_.insert(std::move(access::element(nh))); if (itp.second) { - nh.reset(); + access::reset(nh); return itp.first; } else { return itp.first; @@ -507,7 +486,8 @@ namespace boost { BOOST_ASSERT(pos != end()); node_type nh; auto elem = table_.extract(pos); - nh.emplace(std::move(elem), get_allocator()); + detail::foa::node_handle_emplacer(nh)( + std::move(elem), get_allocator()); return nh; } diff --git a/include/boost/unordered/unordered_node_set.hpp b/include/boost/unordered/unordered_node_set.hpp index e5e14115..bc14ffb0 100644 --- a/include/boost/unordered/unordered_node_set.hpp +++ b/include/boost/unordered/unordered_node_set.hpp @@ -1,4 +1,5 @@ // Copyright (C) 2022-2023 Christian Mazakas +// Copyright (C) 2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -10,8 +11,9 @@ #pragma once #endif +#include #include -#include +#include #include #include #include @@ -35,37 +37,12 @@ namespace boost { #pragma warning(disable : 4714) /* marked as __forceinline not inlined */ #endif - namespace detail { - template - struct node_set_handle - : public detail::foa::node_handle_base - { - private: - using base_type = detail::foa::node_handle_base; - - using typename base_type::type_policy; - - template - friend class boost::unordered::unordered_node_set; - - public: - using value_type = typename TypePolicy::value_type; - - constexpr node_set_handle() noexcept = default; - node_set_handle(node_set_handle&& nh) noexcept = default; - node_set_handle& operator=(node_set_handle&&) noexcept = default; - - value_type& value() const - { - BOOST_ASSERT(!this->empty()); - return const_cast(this->data()); - } - }; - } // namespace detail - template class unordered_node_set { + template + friend class concurrent_node_set; + using set_types = detail::foa::node_set_types::type>; @@ -99,7 +76,7 @@ namespace boost { typename boost::allocator_const_pointer::type; using iterator = typename table_type::iterator; using const_iterator = typename table_type::const_iterator; - using node_type = detail::node_set_handle::type>; using insert_return_type = @@ -210,6 +187,12 @@ namespace boost { { } + unordered_node_set( + concurrent_node_set&& other) + : table_(std::move(other.table_)) + { + } + ~unordered_node_set() = default; unordered_node_set& operator=(unordered_node_set const& other) @@ -312,15 +295,17 @@ namespace boost { insert_return_type insert(node_type&& nh) { + using access = detail::foa::node_handle_access; + if (nh.empty()) { return {end(), false, node_type{}}; } BOOST_ASSERT(get_allocator() == nh.get_allocator()); - auto itp = table_.insert(std::move(nh.element())); + auto itp = table_.insert(std::move(access::element(nh))); if (itp.second) { - nh.reset(); + access::reset(nh); return {itp.first, true, node_type{}}; } else { return {itp.first, false, std::move(nh)}; @@ -329,15 +314,17 @@ namespace boost { iterator insert(const_iterator, node_type&& nh) { + using access = 
detail::foa::node_handle_access; + if (nh.empty()) { return end(); } BOOST_ASSERT(get_allocator() == nh.get_allocator()); - auto itp = table_.insert(std::move(nh.element())); + auto itp = table_.insert(std::move(access::element(nh))); if (itp.second) { - nh.reset(); + access::reset(nh); return itp.first; } else { return itp.first; @@ -395,7 +382,8 @@ namespace boost { BOOST_ASSERT(pos != end()); node_type nh; auto elem = table_.extract(pos); - nh.emplace(std::move(elem), get_allocator()); + detail::foa::node_handle_emplacer(nh)( + std::move(elem), get_allocator()); return nh; } diff --git a/test/CMakeLists.txt b/test/CMakeLists.txt index 0f228ac0..7347173d 100644 --- a/test/CMakeLists.txt +++ b/test/CMakeLists.txt @@ -135,7 +135,7 @@ foa_tests(SOURCES exception/merge_exception_tests.cpp) # CFOA tests -cfoa_tests(SOURCES cfoa/insert_tests.cpp) +cfoa_tests(SOURCES cfoa/insert_tests.cpp COMPILE_OPTIONS $<$:/bigobj>) cfoa_tests(SOURCES cfoa/erase_tests.cpp) cfoa_tests(SOURCES cfoa/try_emplace_tests.cpp) cfoa_tests(SOURCES cfoa/emplace_tests.cpp) diff --git a/test/Jamfile.v2 b/test/Jamfile.v2 index 440f533b..3edc39ca 100644 --- a/test/Jamfile.v2 +++ b/test/Jamfile.v2 @@ -110,6 +110,7 @@ local FCA_TESTS = move_tests narrow_cast_tests node_handle_tests + node_handle_allocator_tests noexcept_tests post_move_tests prime_fmod_tests @@ -232,6 +233,7 @@ local FOA_TESTS = fancy_pointer_noleak pmr_allocator_tests stats_tests + node_handle_allocator_tests ; for local test in $(FOA_TESTS) @@ -308,11 +310,10 @@ alias foa_tests : ; local CFOA_TESTS = - insert_tests erase_tests try_emplace_tests emplace_tests - visit_tests + extract_insert_tests constructor_tests assign_tests clear_tests @@ -338,6 +339,7 @@ local CFOA_TESTS = explicit_alloc_ctor_tests pmr_allocator_tests stats_tests + node_handle_allocator_tests ; for local test in $(CFOA_TESTS) @@ -348,6 +350,28 @@ for local test in $(CFOA_TESTS) ; } +run cfoa/insert_tests.cpp + : + : + : $(CPP11) multi + msvc:/bigobj + gcc:on + gcc:space + clang:on + clang:space + : cfoa_insert_tests ; + +run cfoa/visit_tests.cpp + : + : + : $(CPP11) multi + msvc:/bigobj + gcc:on + gcc:space + clang:on + clang:space + : cfoa_visit_tests ; + run cfoa/serialization_tests.cpp : : @@ -383,6 +407,8 @@ make_cfoa_interprocess_concurrency_tests cfoa_interproc_conc_tests_stats alias cfoa_tests : cfoa_$(CFOA_TESTS) + cfoa_insert_tests + cfoa_visit_tests cfoa_serialization_tests cfoa_interproc_conc_tests cfoa_interproc_conc_tests_stats ; diff --git a/test/cfoa/assign_tests.cpp b/test/cfoa/assign_tests.cpp index f201e9da..ea9403c2 100644 --- a/test/cfoa/assign_tests.cpp +++ b/test/cfoa/assign_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include #if defined(__clang__) && defined(__has_warning) @@ -37,19 +39,35 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + using fancy_map_type = boost::unordered::concurrent_flat_map > >; +using fancy_node_map_type = boost::unordered::concurrent_node_map > >; + using fancy_set_type = boost::unordered::concurrent_flat_set >; +using fancy_node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; fancy_map_type* fancy_test_map; +fancy_node_map_type* fancy_test_node_map; fancy_set_type* fancy_test_set; +fancy_node_set_type* fancy_test_node_set; std::initializer_list map_init_list{ {raii{0}, raii{0}}, @@ -102,7 +120,9 @@ std::initializer_list set_init_list{ }; auto test_map_and_init_list=std::make_pair(test_map,map_init_list); +auto test_node_map_and_init_list=std::make_pair(test_node_map,map_init_list); auto test_set_and_init_list=std::make_pair(test_set,set_init_list); +auto test_node_set_and_init_list=std::make_pair(test_node_set,set_init_list); template struct poca_allocator: fancy_allocator @@ -928,7 +948,7 @@ namespace { } template - void flat_move_assign(X*, GF gen_factory, test::random_generator rg) + void nonconcurrent_move_assign(X*, GF gen_factory, test::random_generator rg) { using value_type = typename X::value_type; static constexpr auto value_type_cardinality = @@ -950,16 +970,17 @@ namespace { { raii::reset_counts(); - flat_container flat(values.begin(), values.end(), values.size(), + nonconcurrent_container nonc( + values.begin(), values.end(), values.size(), hasher(1), key_equal(2), allocator_type(3)); X x(0, hasher(2), key_equal(1), allocator_type(3)); - BOOST_TEST(flat.get_allocator() == x.get_allocator()); + BOOST_TEST(nonc.get_allocator() == x.get_allocator()); - x = std::move(flat); + x = std::move(nonc); - BOOST_TEST(flat.empty()); + BOOST_TEST(nonc.empty()); BOOST_TEST_EQ(x.size(), reference_cont.size()); test_fuzzy_matches_reference(x, reference_cont, rg); @@ -983,17 +1004,18 @@ namespace { X x(values.begin(), values.end(), values.size(), hasher(1), key_equal(2), allocator_type(3)); - flat_container flat(0, hasher(2), key_equal(1), allocator_type(3)); + nonconcurrent_container nonc( + 0, hasher(2), key_equal(1), allocator_type(3)); - BOOST_TEST(flat.get_allocator() == x.get_allocator()); + BOOST_TEST(nonc.get_allocator() == x.get_allocator()); - flat = std::move(x); + nonc = std::move(x); BOOST_TEST(x.empty()); - BOOST_TEST_EQ(flat.size(), reference_cont.size()); + BOOST_TEST_EQ(nonc.size(), reference_cont.size()); - BOOST_TEST_EQ(flat.hash_function(), hasher(1)); - BOOST_TEST_EQ(flat.key_eq(), key_equal(2)); + BOOST_TEST_EQ(nonc.hash_function(), hasher(1)); + BOOST_TEST_EQ(nonc.key_eq(), key_equal(2)); BOOST_TEST_EQ( raii::copy_constructor, value_type_cardinality * reference_cont.size()); @@ -1008,16 +1030,17 @@ namespace { { raii::reset_counts(); - flat_container flat(values.begin(), values.end(), values.size(), + nonconcurrent_container nonc( + values.begin(), values.end(), values.size(), hasher(1), key_equal(2), allocator_type(3)); X x(0, hasher(2), key_equal(1), allocator_type(4)); - 
BOOST_TEST(flat.get_allocator() != x.get_allocator()); + BOOST_TEST(nonc.get_allocator() != x.get_allocator()); - x = std::move(flat); + x = std::move(nonc); - BOOST_TEST(flat.empty()); + BOOST_TEST(nonc.empty()); BOOST_TEST_EQ(x.size(), reference_cont.size()); test_fuzzy_matches_reference(x, reference_cont, rg); @@ -1043,17 +1066,18 @@ namespace { X x(values.begin(), values.end(), values.size(), hasher(1), key_equal(2), allocator_type(3)); - flat_container flat(0, hasher(2), key_equal(1), allocator_type(4)); + nonconcurrent_container nonc( + 0, hasher(2), key_equal(1), allocator_type(4)); - BOOST_TEST(flat.get_allocator() != x.get_allocator()); + BOOST_TEST(nonc.get_allocator() != x.get_allocator()); - flat = std::move(x); + nonc = std::move(x); BOOST_TEST(x.empty()); - BOOST_TEST_EQ(flat.size(), reference_cont.size()); + BOOST_TEST_EQ(nonc.size(), reference_cont.size()); - BOOST_TEST_EQ(flat.hash_function(), hasher(1)); - BOOST_TEST_EQ(flat.key_eq(), key_equal(2)); + BOOST_TEST_EQ(nonc.hash_function(), hasher(1)); + BOOST_TEST_EQ(nonc.key_eq(), key_equal(2)); BOOST_TEST_EQ( raii::copy_constructor, value_type_cardinality * reference_cont.size()); @@ -1073,29 +1097,31 @@ namespace { // clang-format off UNORDERED_TEST( copy_assign, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( move_assign, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( initializer_list_assign, - ((test_map_and_init_list)(test_set_and_init_list))) + ((test_map_and_init_list)(test_node_map_and_init_list) + (test_set_and_init_list)(test_node_set_and_init_list))) UNORDERED_TEST( insert_and_assign, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((init_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( - flat_move_assign, - ((test_map)(test_set)(fancy_test_map)(fancy_test_set)) + nonconcurrent_move_assign, + ((test_map)(test_node_map)(test_set)(test_node_set) + (fancy_test_map)(fancy_test_node_map)(fancy_test_set)(fancy_test_node_set)) ((init_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/clear_tests.cpp b/test/cfoa/clear_tests.cpp index 4a00d08a..f1ff7849 100644 --- a/test/cfoa/clear_tests.cpp +++ b/test/cfoa/clear_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include test::seed_t initialize_seed{674140082}; @@ -20,11 +22,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; namespace { template @@ -130,12 +140,12 @@ namespace { // clang-format off UNORDERED_TEST( clear_tests, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST(insert_and_clear, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/common_helpers.hpp b/test/cfoa/common_helpers.hpp index a8cb7f85..e315efa9 100644 --- a/test/cfoa/common_helpers.hpp +++ b/test/cfoa/common_helpers.hpp @@ -1,15 +1,19 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) - + #ifndef BOOST_UNORDERED_TEST_CFOA_COMMON_HELPERS_HPP #define BOOST_UNORDERED_TEST_CFOA_COMMON_HELPERS_HPP #include #include +#include +#include #include #include +#include +#include #include #include @@ -27,6 +31,31 @@ struct value_cardinality > static constexpr std::size_t value=2; }; +template +struct value_nonconst_cardinality +{ + static constexpr std::size_t value=1; +}; + +template +struct value_nonconst_cardinality > +{ + static constexpr std::size_t value= + 1 * !std::is_const::value + + 1 * !std::is_const::value ; +}; + +template +struct is_container_node_based: std::false_type {}; + +template +struct is_container_node_based > + : std::true_type {}; + +template +struct is_container_node_based > + : std::true_type {}; + template struct reference_container_impl; @@ -39,30 +68,55 @@ struct reference_container_impl > using type = boost::unordered_flat_map; }; +template +struct reference_container_impl > +{ + using type = boost::unordered_node_map; +}; + template struct reference_container_impl > { using type = boost::unordered_flat_set; }; -template -struct flat_container_impl; +template +struct reference_container_impl > +{ + using type = boost::unordered_node_set; +}; template -using flat_container = typename flat_container_impl::type; +struct nonconcurrent_container_impl; + +template +using nonconcurrent_container = + typename nonconcurrent_container_impl::type; template -struct flat_container_impl > +struct nonconcurrent_container_impl > { using type = boost::unordered_flat_map; }; +template +struct nonconcurrent_container_impl > +{ + using type = boost::unordered_node_map; +}; + template -struct flat_container_impl > +struct nonconcurrent_container_impl > { using type = boost::unordered_flat_set; }; +template +struct nonconcurrent_container_impl > +{ + using type = boost::unordered_node_set; +}; + template class Allocator> struct replace_allocator_impl; @@ -95,7 +149,33 @@ struct replace_allocator_impl< using type = boost::concurrent_flat_set >; 
}; - + +template < + typename K, typename V, typename H, typename P, typename A, + template class Allocator +> +struct replace_allocator_impl< + boost::concurrent_node_map, Allocator> +{ + using value_type = + typename boost::concurrent_node_map::value_type; + using type = + boost::concurrent_node_map >; +}; + +template < + typename K, typename H, typename P, typename A, + template class Allocator +> +struct replace_allocator_impl< + boost::concurrent_node_set, Allocator> +{ + using value_type = + typename boost::concurrent_node_set::value_type; + using type = + boost::concurrent_node_set >; +}; + template K const& get_key(K const& x) { return x; } diff --git a/test/cfoa/constructor_tests.cpp b/test/cfoa/constructor_tests.cpp index aa2d0caf..2d8c5f95 100644 --- a/test/cfoa/constructor_tests.cpp +++ b/test/cfoa/constructor_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include test::seed_t initialize_seed(4122023); @@ -52,11 +54,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; std::initializer_list map_init_list{ {raii{0}, raii{0}}, @@ -109,7 +119,9 @@ std::initializer_list set_init_list{ }; auto test_map_and_init_list=std::make_pair(test_map,map_init_list); +auto test_node_map_and_init_list=std::make_pair(test_node_map,map_init_list); auto test_set_and_init_list=std::make_pair(test_set,set_init_list); +auto test_node_set_and_init_list=std::make_pair(test_node_set,set_init_list); namespace { template @@ -865,7 +877,7 @@ namespace { } template - void flat_constructor(X*, GF gen_factory, test::random_generator rg) + void nonconcurrent_constructor(X*, GF gen_factory, test::random_generator rg) { using value_type = typename X::value_type; static constexpr auto value_type_cardinality = @@ -875,12 +887,13 @@ namespace { auto gen = gen_factory.template get(); auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto reference_cont = reference_container(values.begin(), values.end()); - auto reference_flat= flat_container(values.begin(), values.end()); + auto reference_nonc = + nonconcurrent_container(values.begin(), values.end()); raii::reset_counts(); { - flat_container flat( + nonconcurrent_container nonc( values.begin(), values.end(), reference_cont.size(), hasher(1), key_equal(2), allocator_type(3)); @@ -890,9 +903,9 @@ namespace { BOOST_TEST_EQ(old_dc, 0u); BOOST_TEST_EQ(old_mc, 0u); - BOOST_TEST_EQ(old_cc, value_type_cardinality * flat.size()); + BOOST_TEST_EQ(old_cc, value_type_cardinality * nonc.size()); - X x(std::move(flat)); + X x(std::move(nonc)); test_fuzzy_matches_reference(x, reference_cont, rg); @@ -904,15 +917,16 @@ namespace { BOOST_TEST_EQ(x.key_eq(), key_equal(2)); BOOST_TEST(x.get_allocator() == allocator_type(3)); - BOOST_TEST(flat.empty()); + BOOST_TEST(nonc.empty()); } check_raii_counts(); { - flat_container flat(0, hasher(1), key_equal(2), allocator_type(3)); + nonconcurrent_container nonc( + 0, hasher(1), 
key_equal(2), allocator_type(3)); - X x(std::move(flat)); + X x(std::move(nonc)); BOOST_TEST(x.empty()); @@ -920,7 +934,7 @@ namespace { BOOST_TEST_EQ(x.key_eq(), key_equal(2)); BOOST_TEST(x.get_allocator() == allocator_type(3)); - BOOST_TEST(flat.empty()); + BOOST_TEST(nonc.empty()); } check_raii_counts(); @@ -937,17 +951,17 @@ namespace { BOOST_TEST_EQ(old_mc, 0u); BOOST_TEST_EQ(old_cc, 2u * value_type_cardinality * x.size()); - flat_container flat(std::move(x)); + nonconcurrent_container nonc(std::move(x)); - BOOST_TEST(flat == reference_flat); + BOOST_TEST(nonc == reference_nonc); BOOST_TEST_EQ(+raii::default_constructor, old_dc); BOOST_TEST_EQ(+raii::move_constructor, old_mc); BOOST_TEST_EQ(+raii::copy_constructor, old_cc); - BOOST_TEST_EQ(flat.hash_function(), hasher(1)); - BOOST_TEST_EQ(flat.key_eq(), key_equal(2)); - BOOST_TEST(flat.get_allocator() == allocator_type(3)); + BOOST_TEST_EQ(nonc.hash_function(), hasher(1)); + BOOST_TEST_EQ(nonc.key_eq(), key_equal(2)); + BOOST_TEST(nonc.get_allocator() == allocator_type(3)); BOOST_TEST(x.empty()); } @@ -957,13 +971,13 @@ namespace { { X x(0, hasher(1), key_equal(2), allocator_type(3)); - flat_container flat(std::move(x)); + nonconcurrent_container nonc(std::move(x)); - BOOST_TEST(flat.empty()); + BOOST_TEST(nonc.empty()); - BOOST_TEST_EQ(flat.hash_function(), hasher(1)); - BOOST_TEST_EQ(flat.key_eq(), key_equal(2)); - BOOST_TEST(flat.get_allocator() == allocator_type(3)); + BOOST_TEST_EQ(nonc.hash_function(), hasher(1)); + BOOST_TEST_EQ(nonc.key_eq(), key_equal(2)); + BOOST_TEST(nonc.get_allocator() == allocator_type(3)); BOOST_TEST(x.empty()); } @@ -976,83 +990,84 @@ namespace { // clang-format off UNORDERED_TEST( default_constructor, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( bucket_count_with_hasher_key_equal_and_allocator, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( soccc, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( from_iterator_range, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( copy_constructor, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( copy_constructor_with_insertion, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( move_constructor, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( move_constructor_with_insertion, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( iterator_range_with_allocator, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( explicit_allocator, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( initializer_list_with_all_params, - ((test_map_and_init_list)(test_set_and_init_list))) + 
((test_map_and_init_list)(test_node_map_and_init_list) + (test_set_and_init_list)(test_node_set_and_init_list))) UNORDERED_TEST( bucket_count_and_allocator, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( bucket_count_with_hasher_and_allocator, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( iterator_range_with_bucket_count_and_allocator, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( iterator_range_with_bucket_count_hasher_and_allocator, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( - flat_constructor, - ((test_map)(test_set)) + nonconcurrent_constructor, + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/emplace_tests.cpp b/test/cfoa/emplace_tests.cpp index f3a2b926..a7b7682f 100644 --- a/test/cfoa/emplace_tests.cpp +++ b/test/cfoa/emplace_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Copyright (C) 2024 Braden Ganetsky // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -9,6 +9,8 @@ #include #include +#include +#include #include @@ -80,11 +82,8 @@ namespace { } template void operator()(std::vector& values, X& x) { - static constexpr auto value_type_cardinality = - value_cardinality::value; - call_impl(values, x); - BOOST_TEST_GE(raii::move_constructor, value_type_cardinality * x.size()); + BOOST_TEST_GE(raii::move_constructor, x.size()); } } lvalue_emplacer; @@ -197,7 +196,12 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, value_type_cardinality * x.size()); - BOOST_TEST_GT(raii::move_constructor, 0u); + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else { + BOOST_TEST_GT(raii::move_constructor, 0u); + } BOOST_TEST_EQ(raii::copy_assignment, 0u); BOOST_TEST_EQ(raii::move_assignment, 0u); @@ -208,8 +212,8 @@ namespace { { template void operator()(std::vector& values, X& x) { - static constexpr auto value_type_cardinality = - value_cardinality::value; + static constexpr auto input_type_nonconst_cardinality = + value_nonconst_cardinality::value; std::atomic num_inserts{0}; thread_runner(values, [&x, &num_inserts](boost::span s) { @@ -235,10 +239,18 @@ namespace { } else { BOOST_TEST_EQ(raii::copy_constructor, 0u); } + + if (is_container_node_based::value) { + BOOST_TEST_EQ( + raii::move_constructor, input_type_nonconst_cardinality * x.size()); + } + else { + BOOST_TEST_GT( + raii::move_constructor, input_type_nonconst_cardinality * x.size()); + } #if defined(BOOST_MSVC) #pragma warning(pop) // C4127 #endif - BOOST_TEST_GT(raii::move_constructor, value_type_cardinality * x.size()); BOOST_TEST_EQ(raii::copy_assignment, 0u); BOOST_TEST_EQ(raii::move_assignment, 0u); @@ -280,7 +292,9 @@ namespace { } boost::unordered::concurrent_flat_map* map; + boost::unordered::concurrent_node_map* node_map; boost::unordered::concurrent_flat_set* set; + boost::unordered::concurrent_node_set* node_set; } // namespace 
@@ -292,7 +306,7 @@ using test::sequential; UNORDERED_TEST( emplace, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((lvalue_emplacer)(norehash_lvalue_emplacer) (lvalue_emplace_or_cvisit)(lvalue_emplace_or_visit)(copy_emplacer)(move_emplacer)) @@ -455,6 +469,8 @@ namespace { boost::unordered::concurrent_flat_map* test_counted_flat_map = {}; + boost::unordered::concurrent_node_map* + test_counted_node_map = {}; } // namespace @@ -462,7 +478,7 @@ namespace { UNORDERED_TEST( emplace_map_key_value, - ((test_counted_flat_map)) + ((test_counted_flat_map)(test_counted_node_map)) ((copy)(move)) ((counted_key_checker)(converting_key_checker)) ((counted_value_checker)(converting_value_checker)) diff --git a/test/cfoa/equality_tests.cpp b/test/cfoa/equality_tests.cpp index 391be096..b934b9f5 100644 --- a/test/cfoa/equality_tests.cpp +++ b/test/cfoa/equality_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include test::seed_t initialize_seed{1634048962}; @@ -20,28 +22,38 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; namespace { - UNORDERED_AUTO_TEST (simple_map_equality) { - using allocator_type = map_type::allocator_type; + template + void simple_map_equality(X*) + { + using allocator_type = typename X::allocator_type; { - map_type x1( + X x1( {{1, 11}, {2, 22}}, 0, hasher(1), key_equal(2), allocator_type(3)); - map_type x2( + X x2( {{1, 11}, {2, 22}}, 0, hasher(2), key_equal(2), allocator_type(3)); - map_type x3( + X x3( {{1, 11}, {2, 23}}, 0, hasher(2), key_equal(2), allocator_type(3)); - map_type x4({{1, 11}}, 0, hasher(2), key_equal(2), allocator_type(3)); + X x4({{1, 11}}, 0, hasher(2), key_equal(2), allocator_type(3)); BOOST_TEST_EQ(x1.size(), x2.size()); BOOST_TEST(x1 == x2); @@ -57,17 +69,19 @@ namespace { } } - UNORDERED_AUTO_TEST (simple_set_equality) { - using allocator_type = set_type::allocator_type; + template + void simple_set_equality(X*) + { + using allocator_type = typename X::allocator_type; { - set_type x1( + X x1( {1, 2}, 0, hasher(1), key_equal(2), allocator_type(3)); - set_type x2( + X x2( {1, 2}, 0, hasher(2), key_equal(2), allocator_type(3)); - set_type x3({1}, 0, hasher(2), key_equal(2), allocator_type(3)); + X x3({1}, 0, hasher(2), key_equal(2), allocator_type(3)); BOOST_TEST_EQ(x1.size(), x2.size()); BOOST_TEST(x1 == x2); @@ -165,9 +179,17 @@ namespace { } // namespace // clang-format off +UNORDERED_TEST( + simple_map_equality, + ((test_map)(test_node_map))) + +UNORDERED_TEST( + simple_set_equality, + ((test_set)(test_node_set))) + UNORDERED_TEST( insert_and_compare, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/erase_tests.cpp b/test/cfoa/erase_tests.cpp index 71f814cd..9fb11944 100644 --- 
a/test/cfoa/erase_tests.cpp +++ b/test/cfoa/erase_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include #include @@ -439,11 +441,17 @@ namespace { } boost::unordered::concurrent_flat_map* map; + boost::unordered::concurrent_node_map* node_map; boost::unordered::concurrent_flat_set* set; + boost::unordered::concurrent_node_set* node_set; boost::unordered::concurrent_flat_map* transparent_map; + boost::unordered::concurrent_node_map* transparent_node_map; boost::unordered::concurrent_flat_set* transparent_set; + boost::unordered::concurrent_node_set* transparent_node_set; } // namespace @@ -454,14 +462,14 @@ using test::sequential; // clang-format off UNORDERED_TEST( erase, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((lvalue_eraser)(lvalue_eraser_if)(erase_if)(free_fn_erase_if)(erase_if_exec_policy)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( erase, - ((transparent_map)(transparent_set)) + ((transparent_map)(transparent_node_map)(transparent_set)(transparent_node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((transp_lvalue_eraser)(transp_lvalue_eraser_if)(erase_if_exec_policy)) ((default_generator)(sequential)(limited_range))) diff --git a/test/cfoa/exception_assign_tests.cpp b/test/cfoa/exception_assign_tests.cpp index 96199973..873105f0 100644 --- a/test/cfoa/exception_assign_tests.cpp +++ b/test/cfoa/exception_assign_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include using hasher = stateful_hash; using key_equal = stateful_key_equal; @@ -14,11 +16,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; std::initializer_list map_init_list{ {raii{0}, raii{0}}, @@ -71,7 +81,9 @@ std::initializer_list set_init_list{ }; auto test_map_and_init_list=std::make_pair(test_map,map_init_list); +auto test_node_map_and_init_list=std::make_pair(test_node_map,map_init_list); auto test_set_and_init_list=std::make_pair(test_set,set_init_list); +auto test_node_set_and_init_list=std::make_pair(test_node_set,set_init_list); namespace { test::seed_t initialize_seed(1794114520); @@ -206,19 +218,20 @@ using test::sequential; // clang-format off UNORDERED_TEST( copy_assign, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( move_assign, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential))) UNORDERED_TEST( intializer_list_assign, - ((test_map_and_init_list)(test_set_and_init_list))) + ((test_map_and_init_list)(test_node_map_and_init_list) + (test_set_and_init_list)(test_node_set_and_init_list))) // clang-format on RUN_TESTS() diff --git a/test/cfoa/exception_constructor_tests.cpp b/test/cfoa/exception_constructor_tests.cpp index 58ea4fe2..4d89e2ef 100644 --- a/test/cfoa/exception_constructor_tests.cpp +++ b/test/cfoa/exception_constructor_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include using hasher = stateful_hash; using key_equal = stateful_key_equal; @@ -14,11 +16,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; std::initializer_list map_init_list{ {raii{0}, raii{0}}, @@ -71,7 +81,9 @@ std::initializer_list set_init_list{ }; auto test_map_and_init_list=std::make_pair(test_map,map_init_list); +auto test_node_map_and_init_list=std::make_pair(test_node_map,map_init_list); auto test_set_and_init_list=std::make_pair(test_set,set_init_list); +auto test_node_set_and_init_list=std::make_pair(test_node_set,set_init_list); namespace { test::seed_t initialize_seed(795610904); @@ -339,29 +351,30 @@ using test::sequential; // clang-format off UNORDERED_TEST( bucket_constructor, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( iterator_range, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( copy_constructor, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential))) UNORDERED_TEST( move_constructor, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential))) UNORDERED_TEST( initializer_list_bucket_count, - ((test_map_and_init_list)(test_set_and_init_list))) + ((test_map_and_init_list)(test_node_map_and_init_list) + (test_set_and_init_list)(test_node_set_and_init_list))) // clang-format on RUN_TESTS() diff --git a/test/cfoa/exception_erase_tests.cpp b/test/cfoa/exception_erase_tests.cpp index 32a51f5e..34c16222 100644 --- a/test/cfoa/exception_erase_tests.cpp +++ b/test/cfoa/exception_erase_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include #include @@ -283,8 +285,12 @@ namespace { boost::unordered::concurrent_flat_map > >* map; + boost::unordered::concurrent_node_map > >* node_map; boost::unordered::concurrent_flat_set >* set; + boost::unordered::concurrent_node_set >* node_set; } // namespace @@ -295,7 +301,7 @@ using test::sequential; // clang-format off UNORDERED_TEST( erase, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((exception_value_type_generator_factory) (exception_init_type_generator_factory)) ((lvalue_eraser)(lvalue_eraser_if)(erase_if)(free_fn_erase_if)) diff --git a/test/cfoa/exception_insert_tests.cpp b/test/cfoa/exception_insert_tests.cpp index d9b22c31..73d91e1c 100644 --- a/test/cfoa/exception_insert_tests.cpp +++ b/test/cfoa/exception_insert_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include #include @@ -153,7 +155,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_GT(raii::copy_constructor, 0u); - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); } } lvalue_insert_or_assign_copy_assign; @@ -198,7 +208,7 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_GT(raii::copy_constructor, 0u); - BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); BOOST_TEST_EQ(raii::move_assignment, 0u); } } rvalue_insert_or_assign_copy_assign; @@ -249,8 +259,15 @@ namespace { BOOST_TEST_GT(num_inserts, 0u); BOOST_TEST_EQ(raii::default_constructor, 0u); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); } } lvalue_insert_or_cvisit; @@ -288,8 +305,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); } } lvalue_insert_or_visit; @@ -426,8 +449,12 @@ namespace { boost::unordered::concurrent_flat_map > >* map; + boost::unordered::concurrent_node_map > >* node_map; boost::unordered::concurrent_flat_set >* set; + boost::unordered::concurrent_node_set >* node_set; } // namespace @@ -438,7 +465,7 @@ using test::sequential; // clang-format off UNORDERED_TEST( insert, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((exception_value_type_generator_factory) (exception_init_type_generator_factory)) 
((lvalue_inserter)(rvalue_inserter)(iterator_range_inserter) @@ -450,7 +477,7 @@ UNORDERED_TEST( UNORDERED_TEST( insert, - ((map)) + ((map)(node_map)) ((exception_init_type_generator_factory)) ((lvalue_insert_or_assign_copy_assign)(lvalue_insert_or_assign_move_assign) (rvalue_insert_or_assign_copy_assign)(rvalue_insert_or_assign_move_assign)) diff --git a/test/cfoa/exception_merge_tests.cpp b/test/cfoa/exception_merge_tests.cpp index d9e0cc2f..f70053f4 100644 --- a/test/cfoa/exception_merge_tests.cpp +++ b/test/cfoa/exception_merge_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include #include @@ -16,11 +18,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; namespace { test::seed_t initialize_seed(223333016); @@ -79,7 +89,7 @@ using test::sequential; // clang-format off UNORDERED_TEST( merge, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((exception_value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) diff --git a/test/cfoa/explicit_alloc_ctor_tests.cpp b/test/cfoa/explicit_alloc_ctor_tests.cpp index 7f87a060..157b65c9 100644 --- a/test/cfoa/explicit_alloc_ctor_tests.cpp +++ b/test/cfoa/explicit_alloc_ctor_tests.cpp @@ -1,6 +1,6 @@ -// Copyright 2024 Joaquin M Lopez Muoz. +// Copyright 2024 Joaquin M Lopez Munoz. // Distributed under the Boost Software License, Version 1.0. (See accompanying -// file LICENSE_1_0.txt or copy at htT://www.boost.org/LICENSE_1_0.txt) +// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #define BOOST_UNORDERED_CFOA_TESTS #include "../unordered/explicit_alloc_ctor_tests.cpp" diff --git a/test/cfoa/extract_insert_tests.cpp b/test/cfoa/extract_insert_tests.cpp new file mode 100644 index 00000000..a3619a22 --- /dev/null +++ b/test/cfoa/extract_insert_tests.cpp @@ -0,0 +1,162 @@ +// Copyright (C) 2024 Joaquin M Lopez Munoz +// Distributed under the Boost Software License, Version 1.0. 
(See accompanying +// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) + +#include "helpers.hpp" + +#include +#include +#include + +using hasher = stateful_hash; +using key_equal = stateful_key_equal; + +using node_map_type = boost::unordered::concurrent_node_map > >; + +using node_set_type = boost::unordered::concurrent_node_set >; + +node_map_type* test_node_map; +node_set_type* test_node_set; + +namespace { + + template + void extract_insert_tests(X*, GF gen_factory) + { + using value_type = typename X::value_type; + using allocator_type = typename X::allocator_type; + + // set visit is always const access + using arg_visit_type = typename std::conditional< + std::is_same::value, + typename X::value_type const, + typename X::value_type + >::type; + + test::random_generator rg = test::sequential; + auto gen = gen_factory.template get(); + auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); + + + X in(0,hasher(1),key_equal(2), allocator_type(3)); + std::vector out(2,in); + for(std::size_t i = 0; i < values.size(); ++i) { + in.insert(values[i]); + out[i % 3 == 0? 0 : 1].insert(values[i]); + } + + raii::reset_counts(); + + thread_runner(values, [&](boost::span s) { + std::size_t br1 = 0, br2 = 0, br3 = 0; + + for(auto const& v: s) { + typename X::node_type nh; + + while (nh.empty()) { + switch (br1++ % 3) { + case 0: + nh = in.extract(test::get_key(v)); + BOOST_ASSERT(!nh.empty()); + break; + case 1: + nh = in.extract_if( + test::get_key(v), [&](arg_visit_type& v2) { + BOOST_ASSERT(test::get_key(v) == test::get_key(v2)); + (void)v2; + return false; + }); + BOOST_ASSERT(nh.empty()); + break; + case 2: default: + nh = in.extract_if( + test::get_key(v), [&](arg_visit_type& v2) { + BOOST_ASSERT(test::get_key(v) == test::get_key(v2)); + (void)v2; + return true; + }); + BOOST_ASSERT(!nh.empty()); + break; + } + } + BOOST_ASSERT(nh.get_allocator() == in.get_allocator()); + + while (!nh.empty()) { + auto& o = out[br2++ % out.size()]; + typename X::insert_return_type r; + switch (br3++ % 3) { + case 0: + r = o.insert(std::move(nh)); + break; + case 1: + r = o.insert_or_visit( + std::move(nh), [&](arg_visit_type& v2) { + BOOST_ASSERT(test::get_key(v) == test::get_key(v2)); + (void)v2; + }); + break; + case 2: default: + r = o.insert_or_cvisit( + std::move(nh), [&](arg_visit_type const& v2) { + BOOST_ASSERT(test::get_key(v) == test::get_key(v2)); + (void)v2; + }); + break; + } + BOOST_ASSERT(r.inserted || !r.node.empty()); + nh = std::move(r.node); + } + } + }); + + BOOST_TEST_EQ(in.size(), 0u); + BOOST_TEST_EQ(out[0].size() + out[1].size(), 2 * values.size()); + BOOST_TEST_EQ(raii::default_constructor, 0u); + BOOST_TEST_EQ(raii::copy_constructor, 0u); + BOOST_TEST_EQ(raii::move_constructor, 0u); + BOOST_TEST_EQ(raii::destructor, 0u); + } + + template + void insert_empty_node_tests(X*) + { + using value_type = typename X::value_type; + using node_type = typename X::node_type ; + + X x; + { + node_type nh; + auto r = x.insert(std::move(nh)); + BOOST_TEST(!r.inserted); + BOOST_TEST(r.node.empty()); + } + { + node_type nh; + auto r = x.insert_or_visit(std::move(nh), [](value_type const&) {}); + BOOST_TEST(!r.inserted); + BOOST_TEST(r.node.empty()); + } + { + node_type nh; + auto r = x.insert_or_cvisit(std::move(nh), [](value_type const&) {}); + BOOST_TEST(!r.inserted); + BOOST_TEST(r.node.empty()); + } + } + +} // namespace + +// clang-format off +UNORDERED_TEST( + extract_insert_tests, + ((test_node_map)(test_node_set)) + ((value_type_generator_factory))) + 
+UNORDERED_TEST( + insert_empty_node_tests, + ((test_node_map)(test_node_set))) +// clang-format on + +RUN_TESTS() diff --git a/test/cfoa/fwd_tests.cpp b/test/cfoa/fwd_tests.cpp index 5b37dddd..6cf6d867 100644 --- a/test/cfoa/fwd_tests.cpp +++ b/test/cfoa/fwd_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include test::seed_t initialize_seed{32304628}; @@ -36,6 +38,27 @@ bool unequal_call(boost::unordered::concurrent_flat_map& x1, return x1 != x2; } +template +void swap_call(boost::unordered::concurrent_node_map& x1, + boost::unordered::concurrent_node_map& x2) +{ + swap(x1, x2); +} + +template +bool equal_call(boost::unordered::concurrent_node_map& x1, + boost::unordered::concurrent_node_map& x2) +{ + return x1 == x2; +} + +template +bool unequal_call(boost::unordered::concurrent_node_map& x1, + boost::unordered::concurrent_node_map& x2) +{ + return x1 != x2; +} + template void swap_call(boost::unordered::concurrent_flat_set& x1, boost::unordered::concurrent_flat_set& x2) @@ -57,14 +80,41 @@ bool unequal_call(boost::unordered::concurrent_flat_set& x1, return x1 != x2; } +template +void swap_call(boost::unordered::concurrent_node_set& x1, + boost::unordered::concurrent_node_set& x2) +{ + swap(x1, x2); +} + +template +bool equal_call(boost::unordered::concurrent_node_set& x1, + boost::unordered::concurrent_node_set& x2) +{ + return x1 == x2; +} + +template +bool unequal_call(boost::unordered::concurrent_node_set& x1, + boost::unordered::concurrent_node_set& x2) +{ + return x1 != x2; +} + #include #include +#include +#include using map_type = boost::unordered::concurrent_flat_map; -using set_type = boost::unordered::concurrent_flat_map; +using node_map_type = boost::unordered::concurrent_node_map; +using set_type = boost::unordered::concurrent_flat_set; +using node_set_type = boost::unordered::concurrent_node_set; map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; template void fwd_swap_call(X*) @@ -106,19 +156,19 @@ void max_size(X*) // clang-format off UNORDERED_TEST( fwd_swap_call, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( fwd_equal_call, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( fwd_unequal_call, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( max_size, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) // clang-format on RUN_TESTS() diff --git a/test/cfoa/helpers.hpp b/test/cfoa/helpers.hpp index ef58f3fe..cca8d115 100644 --- a/test/cfoa/helpers.hpp +++ b/test/cfoa/helpers.hpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Copyright (C) 2024 Braden Ganetsky // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -147,9 +147,15 @@ public: cfoa_ptr(std::nullptr_t) : p_(nullptr){}; template using rebind = cfoa_ptr; + operator bool() const { return !!p_; } + + template + Q& operator*() const noexcept { return *p_; } + T* operator->() const noexcept { return p_; } - static cfoa_ptr pointer_to(element_type& r) { return {std::addressof(r)}; } + template + static cfoa_ptr pointer_to(Q& r) { return {std::addressof(r)}; } }; template struct stateful_allocator @@ -670,4 +676,17 @@ public: fancy_allocator& operator=(fancy_allocator const&) { return *this; } }; +namespace boost { + template <> struct pointer_traits + { + template struct rebind_to + { + typedef ptr type; + }; + + template + using rebind=typename rebind_to::type; + }; +} // namespace boost + #endif // BOOST_UNORDERED_TEST_CFOA_HELPERS_HPP diff --git a/test/cfoa/insert_tests.cpp b/test/cfoa/insert_tests.cpp index 8f1f3933..4747a746 100644 --- a/test/cfoa/insert_tests.cpp +++ b/test/cfoa/insert_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -8,6 +8,8 @@ #include #include #include +#include +#include #include @@ -179,8 +181,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, 2 * x.size()); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::copy_assignment, values.size() - x.size()); BOOST_TEST_EQ(raii::move_assignment, 0u); } @@ -198,7 +207,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, x.size()); - BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, x.size()); + } + else{ + BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + } + BOOST_TEST_EQ(raii::copy_assignment, 0u); BOOST_TEST_EQ(raii::move_assignment, values.size() - x.size()); } @@ -216,7 +232,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, x.size()); - BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, x.size()); + } + else{ + BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + } + BOOST_TEST_EQ(raii::copy_assignment, values.size() - x.size()); BOOST_TEST_EQ(raii::move_assignment, 0u); } @@ -234,7 +257,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, 0u); - BOOST_TEST_GE(raii::move_constructor, 2 * x.size()); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 2 * x.size()); + } + else{ + BOOST_TEST_GE(raii::move_constructor, 2 * x.size()); // rehashing + } + BOOST_TEST_EQ(raii::copy_assignment, 0u); BOOST_TEST_EQ(raii::move_assignment, values.size() - x.size()); } @@ -260,7 +290,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, x.size()); BOOST_TEST_EQ(raii::copy_constructor, x.size()); - 
BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + BOOST_TEST_GT(raii::move_constructor, 0u); // rehashing + } + BOOST_TEST_EQ(raii::copy_assignment, values.size() - x.size()); BOOST_TEST_EQ(raii::move_assignment, 0u); } @@ -284,7 +321,14 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, x.size()); BOOST_TEST_EQ(raii::copy_constructor, 0u); - BOOST_TEST_GT(raii::move_constructor, 2 * x.size()); // rehashing + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, x.size()); + } + else{ + BOOST_TEST_GT(raii::move_constructor, x.size()); // rehashing + } + BOOST_TEST_EQ(raii::copy_assignment, 0u); BOOST_TEST_EQ(raii::move_assignment, values.size() - x.size()); } @@ -319,8 +363,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ( raii::copy_constructor, value_type_cardinality * x.size()); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); } } lvalue_insert_or_cvisit; @@ -360,8 +411,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, value_type_cardinality * x.size()); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); } } lvalue_insert_or_visit; @@ -665,12 +723,14 @@ namespace { } } - UNORDERED_AUTO_TEST (insert_sfinae_test) { + + template + void insert_map_sfinae_test(X*) + { // mostly a compile-time tests to ensure that there's no ambiguity when a // user does this - using value_type = - typename boost::unordered::concurrent_flat_map::value_type; - boost::unordered::concurrent_flat_map x; + using value_type = typename X::value_type; + X x; x.insert({1, 2}); x.insert_or_visit({2, 3}, [](value_type&) {}); @@ -684,11 +744,23 @@ namespace { std::equal_to, fancy_allocator > >* fancy_map; + boost::unordered::concurrent_node_map* node_map; + boost::unordered::concurrent_node_map* trans_node_map; + boost::unordered::concurrent_node_map, + std::equal_to, fancy_allocator > >* + fancy_node_map; + boost::unordered::concurrent_flat_set* set; boost::unordered::concurrent_flat_set, - std::equal_to, fancy_allocator > >* + std::equal_to, fancy_allocator >* fancy_set; + boost::unordered::concurrent_node_set* node_set; + boost::unordered::concurrent_node_set, + std::equal_to, fancy_allocator >* + fancy_node_set; + std::initializer_list > map_init_list{ {raii{0}, raii{0}}, {raii{1}, raii{1}}, @@ -740,7 +812,9 @@ namespace { }; auto map_and_init_list=std::make_pair(map,map_init_list); + auto node_map_and_init_list=std::make_pair(node_map,map_init_list); auto set_and_init_list=std::make_pair(set,set_init_list); + auto node_set_and_init_list=std::make_pair(node_set,set_init_list); } // namespace @@ -751,11 +825,12 @@ using test::sequential; // clang-format off UNORDERED_TEST( insert_initializer_list, - ((map_and_init_list)(set_and_init_list))) + 
((map_and_init_list)(node_map_and_init_list)(set_and_init_list)(node_set_and_init_list))) UNORDERED_TEST( insert, - ((map)(fancy_map)(set)(fancy_set)) + ((map)(fancy_map)(node_map)(fancy_node_map) + (set)(fancy_set)(node_set)(fancy_node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((lvalue_inserter)(rvalue_inserter)(iterator_range_inserter) (norehash_lvalue_inserter)(norehash_rvalue_inserter) @@ -766,7 +841,7 @@ UNORDERED_TEST( insert, - ((map)) + ((map)(node_map)) ((init_type_generator_factory)) ((lvalue_insert_or_assign_copy_assign)(lvalue_insert_or_assign_move_assign) (rvalue_insert_or_assign_copy_assign)(rvalue_insert_or_assign_move_assign)) @@ -774,10 +849,14 @@ UNORDERED_TEST( insert, - ((trans_map)) + ((trans_map)(trans_node_map)) ((init_type_generator_factory)) ((trans_insert_or_assign_copy_assign)(trans_insert_or_assign_move_assign)) ((default_generator)(sequential)(limited_range))) + +UNORDERED_TEST( + insert_map_sfinae_test, + ((map)(node_map))) // clang-format on RUN_TESTS() diff --git a/test/cfoa/merge_tests.cpp b/test/cfoa/merge_tests.cpp index eab8cc29..651c724b 100644 --- a/test/cfoa/merge_tests.cpp +++ b/test/cfoa/merge_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include test::seed_t initialize_seed{402031699}; @@ -23,19 +25,38 @@ using map2_type = boost::unordered::concurrent_flat_map, std::equal_to, stateful_allocator > >; +using node_map_type = boost::unordered::concurrent_node_map > >; +using node_map2_type = boost::unordered::concurrent_node_map, std::equal_to, + stateful_allocator > >; + using set_type = boost::unordered::concurrent_flat_set >; using set2_type = boost::unordered::concurrent_flat_set, std::equal_to, stateful_allocator >; +using node_set_type = boost::unordered::concurrent_node_set >; +using node_set2_type = boost::unordered::concurrent_node_set, + std::equal_to, stateful_allocator >; + map_type* test_map; map2_type* test_map2; auto test_maps=std::make_pair(test_map,test_map2); +node_map_type* test_node_map; +node_map2_type* test_node_map2; +auto test_node_maps=std::make_pair(test_node_map,test_node_map2); + set_type* test_set; set2_type* test_set2; auto test_sets=std::make_pair(test_set,test_set2); +node_set_type* test_node_set; +node_set2_type* test_node_set2; +auto test_node_sets=std::make_pair(test_node_set,test_node_set2); + struct { template @@ -91,9 +112,16 @@ namespace { }); BOOST_TEST_EQ(raii::copy_constructor, old_cc + expected_copies); - BOOST_TEST_EQ( - raii::move_constructor, - value_type_cardinality * reference_cont.size()); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + BOOST_TEST_EQ( + raii::move_constructor, + value_type_cardinality * reference_cont.size()); + } + BOOST_TEST_EQ(+num_merged, reference_cont.size()); test_fuzzy_matches_reference(x, reference_cont, rg); @@ -210,10 +238,17 @@ namespace { t3.join(); if (num_merges > 0) { - // num merges is 0 most commonly in the cast of the limited_range - // generator as both maps will contains keys from 0 to 99 - BOOST_TEST_EQ( - +raii::move_constructor, value_type_cardinality * num_merges); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + //
num merges is 0 most commonly in the case of the limited_range + // generator as both maps will contain keys from 0 to 99 + BOOST_TEST_EQ( + raii::move_constructor, value_type_cardinality * num_merges); + } + BOOST_TEST_GE(call_count, 1u); } @@ -229,14 +264,14 @@ namespace { // clang-format off UNORDERED_TEST( merge_tests, - ((test_maps)(test_sets)) + ((test_maps)(test_node_maps)(test_sets)(test_node_sets)) ((lvalue_merge)(rvalue_merge)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( insert_and_merge_tests, - ((test_maps)(test_sets)) + ((test_maps)(test_node_maps)(test_sets)(test_node_sets)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/node_handle_allocator_tests.cpp b/test/cfoa/node_handle_allocator_tests.cpp new file mode 100644 index 00000000..b7ac6290 --- /dev/null +++ b/test/cfoa/node_handle_allocator_tests.cpp @@ -0,0 +1,6 @@ +// Copyright 2024 Joaquin M Lopez Munoz. +// Distributed under the Boost Software License, Version 1.0. (See accompanying +// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) + +#define BOOST_UNORDERED_CFOA_TESTS +#include "../unordered/node_handle_allocator_tests.cpp" diff --git a/test/cfoa/pmr_allocator_tests.cpp b/test/cfoa/pmr_allocator_tests.cpp index 0c0df338..d0f2eb41 100644 --- a/test/cfoa/pmr_allocator_tests.cpp +++ b/test/cfoa/pmr_allocator_tests.cpp @@ -5,4 +5,6 @@ #define BOOST_UNORDERED_CFOA_TESTS #include #include +#include +#include #include "../unordered/pmr_allocator_tests.cpp" diff --git a/test/cfoa/reentrancy_check_test.cpp b/test/cfoa/reentrancy_check_test.cpp index 0bbb41b0..f843fccc 100644 --- a/test/cfoa/reentrancy_check_test.cpp +++ b/test/cfoa/reentrancy_check_test.cpp @@ -1,4 +1,4 @@ -// Copyright 2023 Joaquin M Lopez Munoz +// Copyright 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. // https://www.boost.org/LICENSE_1_0.txt @@ -32,15 +32,21 @@ namespace boost { #include #include +#include +#include #include using test::default_generator; using map_type = boost::unordered::concurrent_flat_map; +using node_map_type = boost::unordered::concurrent_node_map; using set_type = boost::unordered::concurrent_flat_set; +using node_set_type = boost::unordered::concurrent_node_set; map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; template void detect_reentrancy(F f) @@ -105,7 +111,7 @@ namespace { // clang-format off UNORDERED_TEST( reentrancy_tests, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator))) // clang-format on diff --git a/test/cfoa/rehash_tests.cpp b/test/cfoa/rehash_tests.cpp index fd0b31df..b9189feb 100644 --- a/test/cfoa/rehash_tests.cpp +++ b/test/cfoa/rehash_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0.
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include using test::default_generator; using test::limited_range; @@ -18,11 +20,19 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + map_type* test_map; +node_map_type* test_node_map; set_type* test_set; +node_set_type* test_node_set; namespace { test::seed_t initialize_seed{748775921}; @@ -187,15 +197,15 @@ namespace { // clang-format off UNORDERED_TEST( rehash_no_insert, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( reserve_no_insert, - ((test_map)(test_set))) + ((test_map)(test_node_map)(test_set)(test_node_set))) UNORDERED_TEST( insert_and_erase_with_rehash, - ((test_map)(test_set)) + ((test_map)(test_node_map)(test_set)(test_node_set)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) // clang-format on diff --git a/test/cfoa/serialization_tests.cpp b/test/cfoa/serialization_tests.cpp index 135567c5..fa928e05 100644 --- a/test/cfoa/serialization_tests.cpp +++ b/test/cfoa/serialization_tests.cpp @@ -1,4 +1,4 @@ -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -12,6 +12,8 @@ #include #include #include +#include +#include namespace { @@ -70,11 +72,15 @@ namespace { boost::concurrent_flat_map< test::object, test::object, test::hash, test::equal_to>* test_flat_map; + boost::concurrent_node_map< + test::object, test::object, test::hash, test::equal_to>* test_node_map; boost::concurrent_flat_set< test::object, test::hash, test::equal_to>* test_flat_set; + boost::concurrent_node_set< + test::object, test::hash, test::equal_to>* test_node_set; UNORDERED_TEST(serialization_tests, - ((test_flat_map)(test_flat_set)) + ((test_flat_map)(test_node_map)(test_flat_set)(test_node_set)) ((text_archive)(xml_archive)) ((default_generator))) } diff --git a/test/cfoa/stats_tests.cpp b/test/cfoa/stats_tests.cpp index 043f7570..1d2a7daf 100644 --- a/test/cfoa/stats_tests.cpp +++ b/test/cfoa/stats_tests.cpp @@ -1,4 +1,4 @@ -// Copyright 2024 Joaquin M Lopez Muoz. +// Copyright 2024 Joaquin M Lopez Munoz. // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) diff --git a/test/cfoa/swap_tests.cpp b/test/cfoa/swap_tests.cpp index 42408e24..a5812844 100644 --- a/test/cfoa/swap_tests.cpp +++ b/test/cfoa/swap_tests.cpp @@ -1,5 +1,5 @@ // Copyright (C) 2023 Christian Mazakas -// Copyright (C) 2023 Joaquin M Lopez Munoz +// Copyright (C) 2023-2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. 
(See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) @@ -7,6 +7,8 @@ #include #include +#include +#include test::seed_t initialize_seed{996130204}; @@ -62,9 +64,15 @@ using key_equal = stateful_key_equal; using map_type = boost::unordered::concurrent_flat_map > >; +using node_map_type = boost::unordered::concurrent_node_map > >; + using set_type = boost::unordered::concurrent_flat_set >; +using node_set_type = boost::unordered::concurrent_node_set >; + template struct is_nothrow_member_swappable { static bool const value = @@ -293,21 +301,28 @@ namespace { map_type* map; replace_allocator* pocs_map; + node_map_type* node_map; + replace_allocator* pocs_node_map; + set_type* set; replace_allocator* pocs_set; + node_set_type* node_set; + replace_allocator* pocs_node_set; + } // namespace // clang-format off UNORDERED_TEST( swap_tests, - ((map)(pocs_map)(set)(pocs_set)) + ((map)(pocs_map)(node_map)(pocs_node_map) + (set)(pocs_set)(node_set)(pocs_node_set)) ((member_fn_swap)(free_fn_swap)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST(insert_and_swap, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((member_fn_swap)(free_fn_swap)) ((value_type_generator_factory)) ((default_generator)(sequential)(limited_range))) diff --git a/test/cfoa/try_emplace_tests.cpp b/test/cfoa/try_emplace_tests.cpp index fa8ca5e6..e7628f3a 100644 --- a/test/cfoa/try_emplace_tests.cpp +++ b/test/cfoa/try_emplace_tests.cpp @@ -1,10 +1,12 @@ // Copyright (C) 2023 Christian Mazakas +// Copyright (C) 2024 Joaquin M Lopez Munoz // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #include "helpers.hpp" #include +#include #include @@ -161,8 +163,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, x.size()); BOOST_TEST_EQ(raii::copy_constructor, x.size()); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); BOOST_TEST_EQ(raii::copy_assignment, 0u); } @@ -194,8 +203,15 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, x.size()); BOOST_TEST_EQ(raii::copy_constructor, x.size()); - // don't check move construction count here because of rehashing - BOOST_TEST_GT(raii::move_constructor, 0u); + + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + // don't check move construction count here because of rehashing + BOOST_TEST_GT(raii::move_constructor, 0u); + } + BOOST_TEST_EQ(raii::move_assignment, 0u); BOOST_TEST_EQ(raii::copy_assignment, 0u); } @@ -229,7 +245,12 @@ namespace { if (std::is_same::value) { BOOST_TEST_EQ(raii::copy_constructor, x.size()); - BOOST_TEST_GE(raii::move_constructor, x.size()); + if (is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + BOOST_TEST_GE(raii::move_constructor, x.size()); + } } else { BOOST_TEST_EQ(raii::copy_constructor, 0u); BOOST_TEST_GE(raii::move_constructor, x.size()); @@ -264,7 +285,12 @@ namespace { BOOST_TEST_EQ(raii::default_constructor, x.size()); if (std::is_same::value) { BOOST_TEST_EQ(raii::copy_constructor, x.size()); - BOOST_TEST_GE(raii::move_constructor, x.size()); + if 
(is_container_node_based::value) { + BOOST_TEST_EQ(raii::move_constructor, 0u); + } + else{ + BOOST_TEST_GE(raii::move_constructor, x.size()); + } } else { BOOST_TEST_EQ(raii::copy_constructor, 0u); BOOST_TEST_GE(raii::move_constructor, x.size()); @@ -367,6 +393,10 @@ namespace { boost::unordered::concurrent_flat_map* transp_map; + boost::unordered::concurrent_node_map* node_map; + boost::unordered::concurrent_node_map* transp_node_map; + } // namespace using test::default_generator; @@ -379,7 +409,7 @@ value_generator > init_type_generator; // clang-format off UNORDERED_TEST( try_emplace, - ((map)) + ((map)(node_map)) ((value_type_generator)(init_type_generator)) ((lvalue_try_emplacer)(norehash_lvalue_try_emplacer) (rvalue_try_emplacer)(norehash_rvalue_try_emplacer) @@ -389,7 +419,7 @@ UNORDERED_TEST( UNORDERED_TEST( try_emplace, - ((transp_map)) + ((transp_map)(transp_node_map)) ((init_type_generator)) ((transp_try_emplace)(norehash_transp_try_emplace) (transp_try_emplace_or_cvisit)(transp_try_emplace_or_visit)) diff --git a/test/cfoa/visit_tests.cpp b/test/cfoa/visit_tests.cpp index 02bcea32..d10cd390 100644 --- a/test/cfoa/visit_tests.cpp +++ b/test/cfoa/visit_tests.cpp @@ -17,6 +17,8 @@ #include #include +#include +#include #include #include @@ -971,9 +973,15 @@ namespace { boost::unordered::concurrent_flat_map* map; boost::unordered::concurrent_flat_map* transp_map; + boost::unordered::concurrent_node_map* node_map; + boost::unordered::concurrent_node_map* transp_node_map; boost::unordered::concurrent_flat_set* set; boost::unordered::concurrent_flat_set* transp_set; + boost::unordered::concurrent_node_set* node_set; + boost::unordered::concurrent_node_set* transp_node_set; struct mutable_pair { @@ -1114,7 +1122,9 @@ namespace { } boost::concurrent_flat_set< - mutable_pair, mutable_pair_hash, mutable_pair_equal_to>* mutable_set; + mutable_pair, mutable_pair_hash, mutable_pair_equal_to>* mutable_set; + boost::concurrent_node_set< + mutable_pair, mutable_pair_hash, mutable_pair_equal_to>* mutable_node_set; } // namespace @@ -1126,7 +1136,7 @@ using test::sequential; UNORDERED_TEST( visit, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((lvalue_visitor)(visit_all)(visit_while)(exec_policy_visit_all) (exec_policy_visit_while)) @@ -1134,28 +1144,29 @@ UNORDERED_TEST( UNORDERED_TEST( visit, - ((transp_map)(transp_set)) + ((transp_map)(transp_node_map)(transp_set)(transp_node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((transp_visitor)) ((default_generator)(sequential)(limited_range))) UNORDERED_TEST( empty_visit, - ((map)(transp_map)(set)(transp_set)) + ((map)(transp_map)(node_map)(transp_node_map) + (set)(transp_set)(node_set)(transp_node_set)) ((value_type_generator_factory)(init_type_generator_factory)) ((default_generator)(sequential)(limited_range)) ) UNORDERED_TEST( insert_and_visit, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((value_type_generator_factory)) ((sequential)) ) UNORDERED_TEST( bulk_visit, - ((map)(set)) + ((map)(node_map)(set)(node_set)) ((regular_key_extract)) ((value_type_generator_factory)) ((sequential)) @@ -1163,7 +1174,7 @@ UNORDERED_TEST( UNORDERED_TEST( bulk_visit, - ((transp_map)(transp_set)) + ((transp_map)(transp_node_map)(transp_set)(transp_node_set)) ((transp_key_extract)) ((value_type_generator_factory)) ((sequential)) @@ -1173,7 +1184,7 @@ UNORDERED_TEST( UNORDERED_TEST( exclusive_access_set_visit, - ((mutable_set)) + ((mutable_set)(mutable_node_set)) ) // 
clang-format on diff --git a/test/debuggability/visualization_tests.cpp b/test/debuggability/visualization_tests.cpp index 0a7e6204..7e009277 100644 --- a/test/debuggability/visualization_tests.cpp +++ b/test/debuggability/visualization_tests.cpp @@ -20,6 +20,8 @@ #include #include #include +#include +#include #include #include #include @@ -61,8 +63,12 @@ template void visualization_test(Tester& tester) auto cfoa_flat_map_ptr = tester.template construct_map(); auto cfoa_flat_set_ptr = tester.template construct_set(); + auto cfoa_node_map_ptr = tester.template construct_map(); + auto cfoa_node_set_ptr = tester.template construct_set(); auto& cfoa_flat_map = *cfoa_flat_map_ptr; auto& cfoa_flat_set = *cfoa_flat_set_ptr; + auto& cfoa_node_map = *cfoa_node_map_ptr; + auto& cfoa_node_set = *cfoa_node_set_ptr; // clang-format on for (int i = 0; i < 5; ++i) { @@ -75,6 +81,7 @@ template void visualization_test(Tester& tester) foa_flat_map.emplace(str, num); foa_node_map.emplace(str, num); cfoa_flat_map.emplace(str, num); + cfoa_node_map.emplace(str, num); fca_set.emplace(str); fca_multiset.emplace(str); @@ -82,6 +89,7 @@ template void visualization_test(Tester& tester) foa_flat_set.emplace(str); foa_node_set.emplace(str); cfoa_flat_set.emplace(str); + cfoa_node_set.emplace(str); } auto fca_map_begin = fca_map.begin(); @@ -102,7 +110,7 @@ template void visualization_test(Tester& tester) auto foa_node_set_begin = foa_node_set.begin(); auto foa_node_set_end = foa_node_set.end(); - use(cfoa_flat_map, cfoa_flat_set); + use(cfoa_flat_map, cfoa_flat_set, cfoa_node_map, cfoa_node_set); use(fca_map_begin, fca_map_end, fca_multimap_begin, fca_multimap_end, fca_set_begin, fca_set_end, fca_multiset_begin, fca_multiset_end); use(foa_flat_map_begin, foa_flat_map_end, foa_flat_set_begin, diff --git a/test/unordered/explicit_alloc_ctor_tests.cpp b/test/unordered/explicit_alloc_ctor_tests.cpp index 9b6431d9..31d3a0db 100644 --- a/test/unordered/explicit_alloc_ctor_tests.cpp +++ b/test/unordered/explicit_alloc_ctor_tests.cpp @@ -1,14 +1,17 @@ -// Copyright 2024 Joaquin M Lopez Muoz. +// Copyright 2024 Joaquin M Lopez Munoz. // Distributed under the Boost Software License, Version 1.0. 
(See accompanying -// file LICENSE_1_0.txt or copy at htT://www.boost.org/LICENSE_1_0.txt) +// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #ifdef BOOST_UNORDERED_CFOA_TESTS #include #include +#include +#include #else #include "../helpers/unordered.hpp" #endif +#include "../helpers/helpers.hpp" #include "../helpers/test.hpp" #include #include @@ -62,7 +65,16 @@ template void test_explicit_alloc_ctor_extract( template void test_explicit_alloc_ctor_extract( Container& c, std::true_type) { +#ifdef BOOST_UNORDERED_CFOA_TESTS + typename Container::key_type k; + c.cvisit_while([&](typename Container::value_type const & x) { + k = test::get_key(x); + return false; + }); + auto n = c.extract(k); +#else auto n = c.extract(c.begin()); +#endif c.insert(std::move(n)); n = c.extract(typename Container::key_type()); c.insert(std::move(n)); @@ -122,8 +134,13 @@ UNORDERED_AUTO_TEST (explicit_alloc_ctor) { test_explicit_alloc_ctor, std::equal_to, explicit_allocator > > >(); + test_explicit_alloc_ctor, std::equal_to, + explicit_allocator > > >(); test_explicit_alloc_ctor, std::equal_to, explicit_allocator > >(); + test_explicit_alloc_ctor, std::equal_to, explicit_allocator > >(); #elif defined(BOOST_UNORDERED_FOA_TESTS) test_explicit_alloc_ctor, std::equal_to, diff --git a/test/unordered/node_handle_allocator_tests.cpp b/test/unordered/node_handle_allocator_tests.cpp new file mode 100644 index 00000000..d19a703e --- /dev/null +++ b/test/unordered/node_handle_allocator_tests.cpp @@ -0,0 +1,213 @@ +// Copyright (C) 2024 Joaquin M Lopez Munoz +// Distributed under the Boost Software License, Version 1.0. (See accompanying +// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) + +#include + +#if defined(BOOST_GCC) +// Spurious maybe-uninitialized warnings with allocators contained +// in node handles. 
+// Maybe related to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108230
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
+#endif
+
+#ifdef BOOST_UNORDERED_CFOA_TESTS
+#include
+#include
+#else
+#include "../helpers/unordered.hpp"
+#endif
+
+#include "../helpers/test.hpp"
+
+#include
+#include
+#include
+#include
+
+
+namespace {
+  template struct nonassignable_allocator
+  {
+    using value_type = T;
+
+    nonassignable_allocator() = default;
+    nonassignable_allocator(nonassignable_allocator const&) = default;
+
+    template
+    nonassignable_allocator(nonassignable_allocator const&) {}
+
+    nonassignable_allocator& operator=(nonassignable_allocator const&) = delete;
+
+    T* allocate(std::size_t n)
+    {
+      return static_cast(::operator new(n * sizeof(T)));
+    }
+
+    void deallocate(T* p, std::size_t) { ::operator delete(p); }
+
+    bool operator==(nonassignable_allocator const&) const { return true; }
+    bool operator!=(nonassignable_allocator const&) const { return false; }
+  };
+
+  template struct pocx_allocator
+  {
+    int x_;
+
+    using value_type = T;
+    using propagate_on_container_copy_assignment = std::true_type;
+    using propagate_on_container_move_assignment = std::true_type;
+    using propagate_on_container_swap = std::true_type;
+
+    pocx_allocator() : x_{-1} {}
+    pocx_allocator(pocx_allocator const&) = default;
+    pocx_allocator(int const x) : x_{x} {}
+
+    template
+    pocx_allocator(pocx_allocator const& rhs) : x_{rhs.x_}
+    {
+    }
+
+    pocx_allocator& operator=(pocx_allocator const&) = default;
+
+    T* allocate(std::size_t n)
+    {
+      return static_cast(::operator new(n * sizeof(T)));
+    }
+
+    void deallocate(T* p, std::size_t) { ::operator delete(p); }
+
+    bool operator==(pocx_allocator const& rhs) const { return x_ == rhs.x_; }
+    bool operator!=(pocx_allocator const& rhs) const { return x_ != rhs.x_; }
+  };
+
+  template
+  struct replace_allocator_impl;
+
+  template
+  using replace_allocator =
+    typename replace_allocator_impl::type;
+
+  template <
+    typename K, typename H, typename P, typename A,
+    template class Set,
+    typename Allocator
+  >
+  struct replace_allocator_impl, Allocator>
+  {
+    using type = Set<
+      K, H, P, boost::allocator_rebind_t >;
+  };
+
+  template <
+    typename K, typename H, typename T, typename P, typename A,
+    template class Map,
+    typename Allocator
+  >
+  struct replace_allocator_impl, Allocator>
+  {
+    using type = Map<
+      K, T, H, P,
+      boost::allocator_rebind_t > >;
+  };
+
+  template
+  void node_handle_allocator_tests(
+    X*, std::pair allocators)
+  {
+    using value_type = typename X::value_type;
+    using replaced_allocator_container = replace_allocator;
+    using node_type = typename replaced_allocator_container::node_type;
+
+    replaced_allocator_container x1(allocators.first);
+    node_type nh;
+
+    x1.emplace(value_type());
+    nh = x1.extract(0);
+
+    BOOST_TEST(!nh.empty());
+    BOOST_TEST(nh.get_allocator() == x1.get_allocator());
+
+    replaced_allocator_container x2(allocators.second);
+
+    x2.emplace(value_type());
+    nh = x2.extract(0);
+
+    BOOST_TEST(!nh.empty());
+    BOOST_TEST(nh.get_allocator() == x2.get_allocator());
+  }
+
+  template
+  void node_handle_allocator_swap_tests(
+    X*, std::pair allocators)
+  {
+    using value_type = typename X::value_type;
+    using replaced_allocator_container = replace_allocator;
+    using node_type = typename replaced_allocator_container::node_type;
+
+    replaced_allocator_container x1(allocators.first), x2(allocators.second);
+    x1.emplace(value_type());
+    x2.emplace(value_type());
+
+    node_type nh1, nh2;
+
+    nh1 = x1.extract(0);
+    swap(nh1, nh2);
+
+    BOOST_TEST(nh1.empty());
+    BOOST_TEST(!nh2.empty());
+    BOOST_TEST(nh2.get_allocator() == x1.get_allocator());
+
+    nh1 = x2.extract(0);
+    swap(nh1, nh2);
+
+    BOOST_TEST(!nh1.empty());
+    BOOST_TEST(nh1.get_allocator() == x1.get_allocator());
+    BOOST_TEST(!nh2.empty());
+    BOOST_TEST(nh2.get_allocator() == x2.get_allocator());
+  }
+
+#if BOOST_WORKAROUND(BOOST_MSVC, <= 1900)
+#pragma warning(push)
+#pragma warning(disable : 4592) // symbol will be dynamically initialized
+#endif
+
+  std::pair<
+    std::allocator, std::allocator > test_std_allocators({},{});
+  std::pair<
+    nonassignable_allocator,
+    nonassignable_allocator > test_nonassignable_allocators({},{});
+  std::pair<
+    pocx_allocator, pocx_allocator > test_pocx_allocators(5,6);
+
+#if BOOST_WORKAROUND(BOOST_MSVC, <= 1900)
+#pragma warning(pop) // C4592
+#endif
+
+#if defined(BOOST_UNORDERED_FOA_TESTS)
+  boost::unordered_node_map* test_map;
+  boost::unordered_node_set* test_set;
+#elif defined(BOOST_UNORDERED_CFOA_TESTS)
+  boost::concurrent_node_map* test_map;
+  boost::concurrent_node_set* test_set;
+#else
+  boost::unordered_map* test_map;
+  boost::unordered_set* test_set;
+#endif
+} // namespace
+
+// clang-format off
+UNORDERED_TEST(
+  node_handle_allocator_tests,
+  ((test_map)(test_set))
+  ((test_std_allocators)(test_nonassignable_allocators)
+    (test_pocx_allocators)))
+
+UNORDERED_TEST(
+  node_handle_allocator_swap_tests,
+  ((test_map)(test_set))
+  ((test_std_allocators)(test_pocx_allocators)))
+// clang-format on
+
+RUN_TESTS()
diff --git a/test/unordered/pmr_allocator_tests.cpp b/test/unordered/pmr_allocator_tests.cpp
index 44e1ca06..07a3db02 100644
--- a/test/unordered/pmr_allocator_tests.cpp
+++ b/test/unordered/pmr_allocator_tests.cpp
@@ -33,12 +33,23 @@ namespace pmr_allocator_tests {
     test_string_flat_map;
   static boost::unordered::pmr::concurrent_flat_map*
     test_pmr_string_flat_map;
+  static boost::unordered::pmr::concurrent_node_map*
+    test_string_node_map;
+  static boost::unordered::pmr::concurrent_node_map*
+    test_pmr_string_node_map;
   static boost::unordered::pmr::concurrent_flat_set*
     test_string_flat_set;
   static boost::unordered::pmr::concurrent_flat_set*
    test_pmr_string_flat_set;
+  static boost::unordered::pmr::concurrent_node_set*
+    test_string_node_set;
+  static boost::unordered::pmr::concurrent_node_set*
+    test_pmr_string_node_set;
 #define PMR_ALLOCATOR_TESTS_ARGS \
-  ((test_string_flat_map)(test_pmr_string_flat_map)(test_string_flat_set)(test_pmr_string_flat_set))
+  ((test_string_flat_map)(test_pmr_string_flat_map) \
+   (test_string_node_map)(test_pmr_string_node_map) \
+   (test_string_flat_set)(test_pmr_string_flat_set) \
+   (test_string_node_set)(test_pmr_string_node_set))
 #elif defined(BOOST_UNORDERED_FOA_TESTS)
   static boost::unordered::pmr::unordered_flat_map*
     test_string_flat_map;
diff --git a/test/unordered/stats_tests.cpp b/test/unordered/stats_tests.cpp
index 351f07d6..67cd0a92 100644
--- a/test/unordered/stats_tests.cpp
+++ b/test/unordered/stats_tests.cpp
@@ -1,4 +1,4 @@
-// Copyright 2024 Joaquin M Lopez Muñoz.
+// Copyright 2024 Joaquin M Lopez Munoz.
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 
@@ -7,8 +7,12 @@
 #ifdef BOOST_UNORDERED_CFOA_TESTS
 #include
 #include
+#include
+#include
 #include
 #include
+#include
+#include
 #include "../cfoa/helpers.hpp"
 #else
 #include "../helpers/unordered.hpp"
 #endif
@@ -19,6 +23,7 @@
 #include "../helpers/test.hpp"
 #include
 #include
+#include
 #include
 
 template struct unequal_allocator
@@ -45,16 +50,20 @@ template struct unequal_allocator
   int n_;
 };
 
-bool exact_same(double x, double y)
+bool essentially_same(double x, double y)
 {
-  return std::memcmp(
-    reinterpret_cast(&x), reinterpret_cast(&y),
-    sizeof(double))==0;
+  // Some optimizer-related issues in GCC X86 result in last-bit differences
+  // on doubles that should otherwise be identical.
+
+  // https://stackoverflow.com/a/253874/213114
+
+  static constexpr double epsilon = 1.0E-6;
+  return fabs(x - y) <= ( (fabs(x) > fabs(y) ? fabs(x) : fabs(y)) * epsilon);
 }
 
-bool not_exact_same(double x, double y)
+bool not_essentially_same(double x, double y)
 {
-  return !exact_same(x, y);
+  return !essentially_same(x, y);
 }
 
 enum check_stats_contition
@@ -69,19 +78,19 @@ void check_stat(const Stats& s, check_stats_contition cond)
 {
   switch (cond) {
   case stats_empty:
-    BOOST_TEST(exact_same(s.average, 0.0));
-    BOOST_TEST(exact_same(s.variance, 0.0));
-    BOOST_TEST(exact_same(s.deviation, 0.0));
+    BOOST_TEST(essentially_same(s.average, 0.0));
+    BOOST_TEST(essentially_same(s.variance, 0.0));
+    BOOST_TEST(essentially_same(s.deviation, 0.0));
     break;
   case stats_full:
     BOOST_TEST_GT(s.average, 0.0);
-    if(not_exact_same(s.variance, 0.0)) {
+    if(not_essentially_same(s.variance, 0.0)) {
      BOOST_TEST_GT(s.variance, 0.0);
      BOOST_TEST_GT(s.deviation, 0.0);
    }
    break;
   case stats_mostly_full:
-    if(not_exact_same(s.variance, 0.0)) {
+    if(not_essentially_same(s.variance, 0.0)) {
      BOOST_TEST_GT(s.average, 0.0);
      BOOST_TEST_GT(s.variance, 0.0);
      BOOST_TEST_GT(s.deviation, 0.0);
@@ -94,9 +103,9 @@ void check_stat(const Stats& s, check_stats_contition cond)
 template
 void check_stat(const Stats& s1, const Stats& s2)
 {
-  BOOST_TEST(exact_same(s1.average, s2.average));
-  BOOST_TEST(exact_same(s1.variance, s2.variance));
-  BOOST_TEST(exact_same(s1.deviation, s2.deviation));
+  BOOST_TEST(essentially_same(s1.average, s2.average));
+  BOOST_TEST(essentially_same(s1.variance, s2.variance));
+  BOOST_TEST(essentially_same(s1.deviation, s2.deviation));
 }
 
 template
@@ -345,15 +354,28 @@ UNORDERED_AUTO_TEST (stats_) {
       boost::concurrent_flat_map<
        int, int, boost::hash, std::equal_to,
        unequal_allocator< std::pair< const int, int> >>>();
+    test_stats<
+      boost::concurrent_node_map<
+       int, int, boost::hash, std::equal_to,
+       unequal_allocator< std::pair< const int, int> >>>();
     test_stats<
       boost::concurrent_flat_set<
        int, boost::hash, std::equal_to, unequal_allocator>>();
+    test_stats<
+      boost::concurrent_node_set<
+       int, boost::hash, std::equal_to, unequal_allocator>>();
     test_stats_concurrent_unordered_interop<
       boost::unordered_flat_map,
      boost::concurrent_flat_map>();
+    test_stats_concurrent_unordered_interop<
+      boost::unordered_node_map,
+      boost::concurrent_node_map>();
     test_stats_concurrent_unordered_interop<
      boost::unordered_flat_set,
      boost::concurrent_flat_set>();
+    test_stats_concurrent_unordered_interop<
+      boost::unordered_node_set,
+      boost::concurrent_node_set>();
 #elif defined(BOOST_UNORDERED_FOA_TESTS)
     test_stats<
       boost::unordered_flat_map<