Facebook's std::vector optimization

来源:互联网 发布:淘宝一键抢拍神器 编辑:程序博客网 时间:2024/06/02 06:10

folly/FBvector.h

Simply replacing std::vector with folly::fbvector (afterhaving included thefolly/FBVector.h header file) willimprove the performance of your C++ code using vectors withcommon coding patterns. The improvements are always non-negative,almost always measurable, frequently significant, sometimesdramatic, and occasionally spectacular.

Sample


folly::fbvector<int> numbers({0, 1, 2, 3});numbers.reserve(10);for (int i = 4; i < 10; i++) {  numbers.push_back(i * 2);}assert(numbers[6] == 12);

Motivation


std::vector is the stalwart abstraction many use fordynamically-allocated arrays in C++. It is also the best knownand most used of all containers. It may therefore seem asurprise thatstd::vector leaves important - and sometimesvital - efficiency opportunities on the table. This documentexplains how our own drop-in abstractionfbvector improves keyperformance aspects of std::vector. Refer tofolly/test/FBVectorTest.cpp for a few benchmarks.

Memory Handling


It is well known that std::vector grows exponentially (at aconstant factor) in order to avoid quadratic growth performance.The trick is choosing a good factor (any factor greater than 1ensures O(1) amortized append complexity towards infinity). Afactor that's too small causes frequent vector reallocation; onethat's too large forces the vector to consume much more memorythan needed. The initial HP implementation by Stepanov used agrowth factor of 2, i.e. whenever you'dpush_back into a vectorwithout there being room, it would double the current capacity.

With time, other compilers reduced the growth factor to 1.5, butgcc has staunchly used a growth factor of 2. In fact it can bemathematically proven that a growth factor of 2 is rigorously theworst possible because it never allows the vector to reuseany of its previously-allocated memory. That makes the vector cache-unfriendly and memory manager unfriendly.

To see why that's the case, consider a large vector of capacity Cresiding somewhere at the beginning of an initially unoccupiedchunk. When the request for growth comes about, the vector(assuming no in-place resizing, see the appropriate section inthis document) will allocate a chunk next to its current chunk,copy its existing data, and then deallocate the old chunk. So nowwe have a chunk of size C followed by a chunk of size k * C.Continuing this process we'll then have a chunk of size k * k * Cto the right and so on. That leads to a series of the form (using^^ for power):

C, C*k,  C*k^^2, C*k^^3, ...

If we choose k = 2 we know that every element in the series willbe strictly larger than the sum of all previous ones because ofthe remarkable equality:

1 + 2^^1 + 2^^2 + 2^^3... + 2^^n = 2^^(n+1) - 1

What that really means is that the new request for a chunk willbe never satisfiable by coalescing all previously-used chunks.This is not quite what you'd want.

We would of course want the vector to not crawl forward inmemory, but instead to move back to its previously-allocatedchunks. Any number smaller than 2 guarantees that you'll be ableat some point to reuse the previous chunks. Going through themath reveals the equation:

k^^n <= 1 + k + k^^2 + ... + k^^(n-2)

If some number n satisfies that equation, it means you can reusememory after n reallocations. The graphical solver below revealsthat choosing k = 1.5 (blue line) allows memory reuse after 4reallocations, choosing k = 1.45 (red line) allows memory reuseafter 3 reallocations, and choosing k = 1.3 (black line) allowsreuse after only 2 reallocations.

graphical solutions

Of course, the above makes a number of simplifying assumptionsabout how the memory allocator works, but definitely you don'twant to choose the theoretically absolute worst growth factor.fbvector uses a growth factor of 1.5. That does not impede goodperformance at small sizes because of the way fbvectorcooperates with jemalloc (below).

The jemalloc Connection


Virtually all modern allocators allocate memory in fixed-sizequanta that are chosen to minimize management overhead while atthe same time offering good coverage at low slack. For example, anallocator may choose blocks of doubling size (32, 64, 128,, ...) up to 4096, and then blocks of size multiples of apage up until 1MB, and then 512KB increments and so on.

As discussed above, std::vector also needs to (re)allocate inquanta. The next quantum is usually defined in terms of thecurrent size times the infamous growth constant. Because of thissetup,std::vector has some slack memory at the end much likean allocated block has some slack memory at the end.

It doesn't take a rocket surgeon to figure out that an allocator-aware std::vector would be a marriage made in heaven: thevector could directly request blocks of "perfect" size from theallocator so there would be virtually no slack in the allocator.Also, the entire growth strategy could be adjusted to workperfectly with allocator's own block growth strategy. That'sexactly whatfbvector does - it automatically detects the useof jemalloc and adjusts its reallocation strategy accordingly.

But wait, there's more. Many memory allocators do not support in-place reallocation, although most of them could. This comes fromthe now notorious design ofrealloc() to opaquely performeither in-place reallocation or an allocate-memcpy-deallocatecycle. Such lack of control subsequently forced all clib-basedallocator designs to avoid in-place reallocation, and thatincludes C++'snew and std:allocator. This is a major loss ofefficiency because an in-place reallocation, being very cheap,may mean a much less aggressive growth strategy. In turn thatmeans less slack memory and faster reallocations.

Object Relocation


One particularly sensitive topic about handling C++ values isthat they are all conservatively considerednon-relocatable. In contrast, a relocatable value would preserveits invariant even if its bits were moved arbitrarily in memory.For example, anint32 is relocatable because moving its 4 byteswould preserve its actual value, so the address of that valuedoes not "matter" to its integrity.

C++'s assumption of non-relocatable values hurts everybody forthe benefit of a few questionable designs. The issue is thatmoving a C++ object "by the book" entails (a) creating a new copyfrom the existing value; (b) destroying the old value. This isquite vexing and violates common sense; consider thishypothetical conversation between Captain Picard and anincredulous alien:

Incredulous Alien: "So, this teleporter, how does it work?"
Picard: "It beams people and arbitrary matter from one place toanother."
Incredulous Alien: "Hmmm... is it safe?"
Picard: "Yes, but earlier models were a hassle. They'd clone theperson to another location. Then the teleporting chief would haveto shoot the original. Ask O'Brien, he was an intern during thosetimes. A bloody mess, that's what it was."

Only a tiny minority of objects are genuinely non-relocatable:

  • Objects that use internal pointers, e.g.:

    class Ew { char buffer[1024]; char * pointerInsideBuffer;public: Ew() : pointerInsideBuffer(buffer) {} ...}

  • Objects that need to update "observers" that store pointers to them.

The first class of designs can always be redone at small or nocost in efficiency. The second class of objects should not bevalues in the first place - they should be allocated withnewand manipulated using (smart) pointers. It is highly unusual fora value to have observers that alias pointers to it.

Relocatable objects are of high interest to std::vector becausesuch knowledge makes insertion into the vector and vectorreallocation considerably faster: instead of going to Picard'scopy-destroy cycle, relocatable objects can be moved aroundsimply by using memcpy or memmove. This optimization canyield arbitrarily high wins in efficiency; for example, ittransformsvector< vector<double> > or vector< hash_map<int,string> > from risky liabilities into highly workablecompositions.

In order to allow fast relocation without risk, fbvector uses atraitfolly::IsRelocatable defined in "folly/Traits.h". By default,folly::IsRelocatable::value conservatively yields false. Ifyou know that your typeWidget is in fact relocatable, go rightafter Widget's definition and write this:

// at global namespace levelnamespace folly {  struct IsRelocatable<Widget> : boost::true_type {};}

If you don't do this, fbvector<Widget> will fail to compilewith aBOOST_STATIC_ASSERT.

Additional Constraints

Similar improvements are possible in presence of a "simple" type

  • more specifically, one that has a trivial assignment (i.e.assignment is the same as bitblitting the bits over) or a nothrowdefault constructor. These traits are used gainfully byfbvector in a variety of places. Fortunately, these traits arealready present in the C++ standard (well, currently in Boost).To summarize, in order to work withfbvector, a type Widgetmust pass:

    BOOST_STATIC_ASSERT( IsRelocatable::value && (boost::has_trivial_assign::value || boost::has_nothrow_constructor::value));

These traits go hand in hand; for example, it would be verydifficult to design a class that satisfies one branch of theconjunction above but not the other.fbvector uses these simpleconstraints to minimize the number of copies made on many commonoperations such aspush_back, insert, or resize.

To make it easy for you to state assumptions about a given typeor family of parameterized types, check Traits.h and inparticular handy family of macros FOLLY_ASSUME_FBVECTOR_COMPATIBLE*.

Miscellaneous


fbvector uses a careful implementation all around to makesure it doesn't lose efficiency through the cracks. Some futuredirections may be in improving raw memory copying (memcpy isnot an intrinsic in gcc and does not work terribly well forlarge chunks) and in furthering the collaboration withjemalloc. Have fun!


原文链接:https://github.com/facebook/folly/blob/master/folly/docs/FBVector.md


附件:FBVector.h


/** Copyright 2014 Facebook, Inc.** Licensed under the Apache License, Version 2.0 (the "License");* you may not use this file except in compliance with the License.* You may obtain a copy of the License at** http://www.apache.org/licenses/LICENSE-2.0** Unless required by applicable law or agreed to in writing, software* distributed under the License is distributed on an "AS IS" BASIS,* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.* See the License for the specific language governing permissions and* limitations under the License.*//** Nicholas Ormrod (njormrod)* Andrei Alexandrescu (aalexandre)** FBVector is Facebook's drop-in implementation of std::vector. It has special* optimizations for use with relocatable types and jemalloc.*/#ifndef FOLLY_FBVECTOR_H#define FOLLY_FBVECTOR_H//=============================================================================// headers#include <algorithm>#include <cassert>#include <iterator>#include <memory>#include <stdexcept>#include <type_traits>#include <utility>#include <folly/Likely.h>#include <folly/Malloc.h>#include <folly/Traits.h>#include <boost/operators.hpp>// some files expected these from FBVector#include <limits>#include <folly/Foreach.h>#include <boost/type_traits.hpp>#include <boost/utility/enable_if.hpp>//=============================================================================// forward declarationnamespace folly {template <class T, class Allocator = std::allocator<T>>class fbvector;}//=============================================================================// unrolling#define FOLLY_FBV_UNROLL_PTR(first, last, OP) do { \for (; (last) - (first) >= 4; (first) += 4) { \OP(((first) + 0)); \OP(((first) + 1)); \OP(((first) + 2)); \OP(((first) + 3)); \} \for (; (first) != (last); ++(first)) OP((first)); \} while(0);//=============================================================================///////////////////////////////////////////////////////////////////////////////// //// fbvector class //// /////////////////////////////////////////////////////////////////////////////////namespace folly {template <class T, class Allocator>class fbvector : private boost::totally_ordered<fbvector<T, Allocator>> {//===========================================================================//---------------------------------------------------------------------------// implementationprivate:typedef std::allocator_traits<Allocator> A;struct Impl : public Allocator {// typedefstypedef typename A::pointer pointer;typedef typename A::size_type size_type;// datapointer b_, e_, z_;// constructorsImpl() : Allocator(), b_(nullptr), e_(nullptr), z_(nullptr) {}Impl(const Allocator& a): Allocator(a), b_(nullptr), e_(nullptr), z_(nullptr) {}Impl(Allocator&& a): Allocator(std::move(a)), b_(nullptr), e_(nullptr), z_(nullptr) {}Impl(size_type n, const Allocator& a = Allocator()): Allocator(a){ init(n); }Impl(Impl&& other): Allocator(std::move(other)),b_(other.b_), e_(other.e_), z_(other.z_){ other.b_ = other.e_ = other.z_ = nullptr; }// destructor~Impl() {destroy();}// allocation// note that 'allocate' and 'deallocate' are inherited from AllocatorT* D_allocate(size_type n) {if (usingStdAllocator::value) {return static_cast<T*>(malloc(n * sizeof(T)));} else {return std::allocator_traits<Allocator>::allocate(*this, n);}}void D_deallocate(T* p, size_type n) noexcept {if (usingStdAllocator::value) {free(p);} else {std::allocator_traits<Allocator>::deallocate(*this, p, n);}}// helpersvoid swapData(Impl& other) {std::swap(b_, other.b_);std::swap(e_, other.e_);std::swap(z_, other.z_);}// data opsinline void destroy() noexcept {if (b_) {// THIS DISPATCH CODE IS DUPLICATED IN fbvector::D_destroy_range_a.// It has been inlined here for speed. It calls the static fbvector// methods to perform the actual destruction.if (usingStdAllocator::value) {S_destroy_range(b_, e_);} else {S_destroy_range_a(*this, b_, e_);}D_deallocate(b_, z_ - b_);}}void init(size_type n) {if (UNLIKELY(n == 0)) {b_ = e_ = z_ = nullptr;} else {size_type sz = folly::goodMallocSize(n * sizeof(T)) / sizeof(T);b_ = D_allocate(sz);e_ = b_;z_ = b_ + sz;}}voidset(pointer newB, size_type newSize, size_type newCap) {z_ = newB + newCap;e_ = newB + newSize;b_ = newB;}void reset(size_type newCap) {destroy();try {init(newCap);} catch (...) {init(0);throw;}}void reset() { // same as reset(0)destroy();b_ = e_ = z_ = nullptr;}} impl_;static void swap(Impl& a, Impl& b) {using std::swap;if (!usingStdAllocator::value) swap<Allocator>(a, b);a.swapData(b);}//===========================================================================//---------------------------------------------------------------------------// types and constantspublic:typedef T value_type;typedef value_type& reference;typedef const value_type& const_reference;typedef T* iterator;typedef const T* const_iterator;typedef size_t size_type;typedef typename std::make_signed<size_type>::type difference_type;typedef Allocator allocator_type;typedef typename A::pointer pointer;typedef typename A::const_pointer const_pointer;typedef std::reverse_iterator<iterator> reverse_iterator;typedef std::reverse_iterator<const_iterator> const_reverse_iterator;private:typedef std::integral_constant<bool,boost::has_trivial_copy_constructor<T>::value &&sizeof(T) <= 16 // don't force large structures to be passed by value> should_pass_by_value;typedef typename std::conditional<should_pass_by_value::value, T, const T&>::type VT;typedef typename std::conditional<should_pass_by_value::value, T, T&&>::type MT;typedef std::integral_constant<bool,std::is_same<Allocator, std::allocator<T>>::value> usingStdAllocator;typedef std::integral_constant<bool,usingStdAllocator::value ||A::propagate_on_container_move_assignment::value> moveIsSwap;//===========================================================================//---------------------------------------------------------------------------// allocator helpersprivate://---------------------------------------------------------------------------// allocateT* M_allocate(size_type n) {return impl_.D_allocate(n);}//---------------------------------------------------------------------------// deallocatevoid M_deallocate(T* p, size_type n) noexcept {impl_.D_deallocate(p, n);}//---------------------------------------------------------------------------// construct// GCC is very sensitive to the exact way that construct is called. For// that reason there are several different specializations of construct.template <typename U, typename... Args>void M_construct(U* p, Args&&... args) {if (usingStdAllocator::value) {new (p) U(std::forward<Args>(args)...);} else {std::allocator_traits<Allocator>::construct(impl_, p, std::forward<Args>(args)...);}}template <typename U, typename... Args>static void S_construct(U* p, Args&&... args) {new (p) U(std::forward<Args>(args)...);}template <typename U, typename... Args>static void S_construct_a(Allocator& a, U* p, Args&&... args) {std::allocator_traits<Allocator>::construct(a, p, std::forward<Args>(args)...);}// scalar optimization// TODO we can expand this optimization to: default copyable and assignabletemplate <typename U, typename Enable = typenamestd::enable_if<std::is_scalar<U>::value>::type>void M_construct(U* p, U arg) {if (usingStdAllocator::value) {*p = arg;} else {std::allocator_traits<Allocator>::construct(impl_, p, arg);}}template <typename U, typename Enable = typenamestd::enable_if<std::is_scalar<U>::value>::type>static void S_construct(U* p, U arg) {*p = arg;}template <typename U, typename Enable = typenamestd::enable_if<std::is_scalar<U>::value>::type>static void S_construct_a(Allocator& a, U* p, U arg) {std::allocator_traits<Allocator>::construct(a, p, arg);}// const& optimizationtemplate <typename U, typename Enable = typenamestd::enable_if<!std::is_scalar<U>::value>::type>void M_construct(U* p, const U& value) {if (usingStdAllocator::value) {new (p) U(value);} else {std::allocator_traits<Allocator>::construct(impl_, p, value);}}template <typename U, typename Enable = typenamestd::enable_if<!std::is_scalar<U>::value>::type>static void S_construct(U* p, const U& value) {new (p) U(value);}template <typename U, typename Enable = typenamestd::enable_if<!std::is_scalar<U>::value>::type>static void S_construct_a(Allocator& a, U* p, const U& value) {std::allocator_traits<Allocator>::construct(a, p, value);}//---------------------------------------------------------------------------// destroyvoid M_destroy(T* p) noexcept {if (usingStdAllocator::value) {if (!boost::has_trivial_destructor<T>::value) p->~T();} else {std::allocator_traits<Allocator>::destroy(impl_, p);}}//===========================================================================//---------------------------------------------------------------------------// algorithmic helpersprivate://---------------------------------------------------------------------------// destroy_range// wrappersvoid M_destroy_range_e(T* pos) noexcept {D_destroy_range_a(pos, impl_.e_);impl_.e_ = pos;}// dispatch// THIS DISPATCH CODE IS DUPLICATED IN IMPL. SEE IMPL FOR DETAILS.void D_destroy_range_a(T* first, T* last) noexcept {if (usingStdAllocator::value) {S_destroy_range(first, last);} else {S_destroy_range_a(impl_, first, last);}}// allocatorstatic void S_destroy_range_a(Allocator& a, T* first, T* last) noexcept {for (; first != last; ++first)std::allocator_traits<Allocator>::destroy(a, first);}// optimizedstatic void S_destroy_range(T* first, T* last) noexcept {if (!boost::has_trivial_destructor<T>::value) {// EXPERIMENTAL DATA on fbvector<vector<int>> (where each vector<int> has// size 0).// The unrolled version seems to work faster for small to medium sized// fbvectors. It gets a 10% speedup on fbvectors of size 1024, 64, and// 16.// The simple loop version seems to work faster for large fbvectors. The// unrolled version is about 6% slower on fbvectors on size 16384.// The two methods seem tied for very large fbvectors. The unrolled// version is about 0.5% slower on size 262144.// for (; first != last; ++first) first->~T();#define FOLLY_FBV_OP(p) (p)->~T()FOLLY_FBV_UNROLL_PTR(first, last, FOLLY_FBV_OP)#undef FOLLY_FBV_OP}}//---------------------------------------------------------------------------// uninitialized_fill_n// wrappersvoid M_uninitialized_fill_n_e(size_type sz) {D_uninitialized_fill_n_a(impl_.e_, sz);impl_.e_ += sz;}void M_uninitialized_fill_n_e(size_type sz, VT value) {D_uninitialized_fill_n_a(impl_.e_, sz, value);impl_.e_ += sz;}// dispatchvoid D_uninitialized_fill_n_a(T* dest, size_type sz) {if (usingStdAllocator::value) {S_uninitialized_fill_n(dest, sz);} else {S_uninitialized_fill_n_a(impl_, dest, sz);}}void D_uninitialized_fill_n_a(T* dest, size_type sz, VT value) {if (usingStdAllocator::value) {S_uninitialized_fill_n(dest, sz, value);} else {S_uninitialized_fill_n_a(impl_, dest, sz, value);}}// allocatortemplate <typename... Args>static void S_uninitialized_fill_n_a(Allocator& a, T* dest,size_type sz, Args&&... args) {auto b = dest;auto e = dest + sz;try {for (; b != e; ++b)std::allocator_traits<Allocator>::construct(a, b,std::forward<Args>(args)...);} catch (...) {S_destroy_range_a(a, dest, b);throw;}}// optimizedstatic void S_uninitialized_fill_n(T* dest, size_type n) {if (folly::IsZeroInitializable<T>::value) {std::memset(dest, 0, sizeof(T) * n);} else {auto b = dest;auto e = dest + n;try {for (; b != e; ++b) S_construct(b);} catch (...) {--b;for (; b >= dest; --b) b->~T();throw;}}}static void S_uninitialized_fill_n(T* dest, size_type n, const T& value) {auto b = dest;auto e = dest + n;try {for (; b != e; ++b) S_construct(b, value);} catch (...) {S_destroy_range(dest, b);throw;}}//---------------------------------------------------------------------------// uninitialized_copy// it is possible to add an optimization for the case where// It = move(T*) and IsRelocatable<T> and Is0Initiailizable<T>// wrapperstemplate <typename It>void M_uninitialized_copy_e(It first, It last) {D_uninitialized_copy_a(impl_.e_, first, last);impl_.e_ += std::distance(first, last);}template <typename It>void M_uninitialized_move_e(It first, It last) {D_uninitialized_move_a(impl_.e_, first, last);impl_.e_ += std::distance(first, last);}// dispatchtemplate <typename It>void D_uninitialized_copy_a(T* dest, It first, It last) {if (usingStdAllocator::value) {if (folly::IsTriviallyCopyable<T>::value) {S_uninitialized_copy_bits(dest, first, last);} else {S_uninitialized_copy(dest, first, last);}} else {S_uninitialized_copy_a(impl_, dest, first, last);}}template <typename It>void D_uninitialized_move_a(T* dest, It first, It last) {D_uninitialized_copy_a(dest,std::make_move_iterator(first), std::make_move_iterator(last));}// allocatortemplate <typename It>static voidS_uninitialized_copy_a(Allocator& a, T* dest, It first, It last) {auto b = dest;try {for (; first != last; ++first, ++b)std::allocator_traits<Allocator>::construct(a, b, *first);} catch (...) {S_destroy_range_a(a, dest, b);throw;}}// optimizedtemplate <typename It>static void S_uninitialized_copy(T* dest, It first, It last) {auto b = dest;try {for (; first != last; ++first, ++b)S_construct(b, *first);} catch (...) {S_destroy_range(dest, b);throw;}}static voidS_uninitialized_copy_bits(T* dest, const T* first, const T* last) {std::memcpy(dest, first, (last - first) * sizeof(T));}static voidS_uninitialized_copy_bits(T* dest, std::move_iterator<T*> first,std::move_iterator<T*> last) {T* bFirst = first.base();T* bLast = last.base();std::memcpy(dest, bFirst, (bLast - bFirst) * sizeof(T));}template <typename It>static voidS_uninitialized_copy_bits(T* dest, It first, It last) {S_uninitialized_copy(dest, first, last);}//---------------------------------------------------------------------------// copy_n// This function is "unsafe": it assumes that the iterator can be advanced at// least n times. However, as a private function, that unsafety is managed// wholly by fbvector itself.template <typename It>static It S_copy_n(T* dest, It first, size_type n) {auto e = dest + n;for (; dest != e; ++dest, ++first) *dest = *first;return first;}static const T* S_copy_n(T* dest, const T* first, size_type n) {if (folly::IsTriviallyCopyable<T>::value) {std::memcpy(dest, first, n * sizeof(T));return first + n;} else {return S_copy_n<const T*>(dest, first, n);}}static std::move_iterator<T*>S_copy_n(T* dest, std::move_iterator<T*> mIt, size_type n) {if (folly::IsTriviallyCopyable<T>::value) {T* first = mIt.base();std::memcpy(dest, first, n * sizeof(T));return std::make_move_iterator(first + n);} else {return S_copy_n<std::move_iterator<T*>>(dest, mIt, n);}}//===========================================================================//---------------------------------------------------------------------------// relocation helpersprivate:// Relocation is divided into three parts://// 1: relocate_move// Performs the actual movement of data from point a to point b.//// 2: relocate_done// Destroys the old data.//// 3: relocate_undo// Destoys the new data and restores the old data.//// The three steps are used because there may be an exception after part 1// has completed. If that is the case, then relocate_undo can nullify the// initial move. Otherwise, relocate_done performs the last bit of tidying// up.//// The relocation trio may use either memcpy, move, or copy. It is decided// by the following case statement://// IsRelocatable && usingStdAllocator -> memcpy// has_nothrow_move && usingStdAllocator -> move// cannot copy -> move// default -> copy//// If the class is non-copyable then it must be movable. However, if the// move constructor is not noexcept, i.e. an error could be thrown, then// relocate_undo will be unable to restore the old data, for fear of a// second exception being thrown. This is a known and unavoidable// deficiency. In lieu of a strong exception guarantee, relocate_undo does// the next best thing: it provides a weak exception guarantee by// destorying the new data, but leaving the old data in an indeterminate// state. Note that that indeterminate state will be valid, since the// old data has not been destroyed; it has merely been the source of a// move, which is required to leave the source in a valid state.// wrappersvoid M_relocate(T* newB) {relocate_move(newB, impl_.b_, impl_.e_);relocate_done(newB, impl_.b_, impl_.e_);}// dispatch type traittypedef std::integral_constant<bool,folly::IsRelocatable<T>::value && usingStdAllocator::value> relocate_use_memcpy;typedef std::integral_constant<bool,(std::is_nothrow_move_constructible<T>::value&& usingStdAllocator::value)|| !std::is_copy_constructible<T>::value> relocate_use_move;// movevoid relocate_move(T* dest, T* first, T* last) {relocate_move_or_memcpy(dest, first, last, relocate_use_memcpy());}void relocate_move_or_memcpy(T* dest, T* first, T* last, std::true_type) {std::memcpy(dest, first, (last - first) * sizeof(T));}void relocate_move_or_memcpy(T* dest, T* first, T* last, std::false_type) {relocate_move_or_copy(dest, first, last, relocate_use_move());}void relocate_move_or_copy(T* dest, T* first, T* last, std::true_type) {D_uninitialized_move_a(dest, first, last);}void relocate_move_or_copy(T* dest, T* first, T* last, std::false_type) {D_uninitialized_copy_a(dest, first, last);}// donevoid relocate_done(T* dest, T* first, T* last) noexcept {if (folly::IsRelocatable<T>::value && usingStdAllocator::value) {// used memcpy; data has been relocated, do not call destructor} else {D_destroy_range_a(first, last);}}// undovoid relocate_undo(T* dest, T* first, T* last) noexcept {if (folly::IsRelocatable<T>::value && usingStdAllocator::value) {// used memcpy, old data is still valid, nothing to do} else if (std::is_nothrow_move_constructible<T>::value &&usingStdAllocator::value) {// noexcept move everything back, aka relocate_moverelocate_move(first, dest, dest + (last - first));} else if (!std::is_copy_constructible<T>::value) {// weak guaranteeD_destroy_range_a(dest, dest + (last - first));} else {// used copy, old data is still validD_destroy_range_a(dest, dest + (last - first));}}//===========================================================================//---------------------------------------------------------------------------// construct/copy/destroypublic:fbvector() = default;explicit fbvector(const Allocator& a) : impl_(a) {}explicit fbvector(size_type n, const Allocator& a = Allocator()): impl_(n, a){ M_uninitialized_fill_n_e(n); }fbvector(size_type n, VT value, const Allocator& a = Allocator()): impl_(n, a){ M_uninitialized_fill_n_e(n, value); }template <class It, class Category = typenamestd::iterator_traits<It>::iterator_category>fbvector(It first, It last, const Allocator& a = Allocator()): fbvector(first, last, a, Category()) {}fbvector(const fbvector& other): impl_(other.size(), A::select_on_container_copy_construction(other.impl_)){ M_uninitialized_copy_e(other.begin(), other.end()); }fbvector(fbvector&& other) noexcept : impl_(std::move(other.impl_)) {}fbvector(const fbvector& other, const Allocator& a): fbvector(other.begin(), other.end(), a) {}fbvector(fbvector&& other, const Allocator& a) : impl_(a) {if (impl_ == other.impl_) {impl_.swapData(other.impl_);} else {impl_.init(other.size());M_uninitialized_move_e(other.begin(), other.end());}}fbvector(std::initializer_list<T> il, const Allocator& a = Allocator()): fbvector(il.begin(), il.end(), a) {}~fbvector() = default; // the cleanup occurs in impl_fbvector& operator=(const fbvector& other) {if (UNLIKELY(this == &other)) return *this;if (!usingStdAllocator::value &&A::propagate_on_container_copy_assignment::value) {if (impl_ != other.impl_) {// can't use other's different allocator to clean up selfimpl_.reset();}(Allocator&)impl_ = (Allocator&)other.impl_;}assign(other.begin(), other.end());return *this;}fbvector& operator=(fbvector&& other) {if (UNLIKELY(this == &other)) return *this;moveFrom(std::move(other), moveIsSwap());return *this;}fbvector& operator=(std::initializer_list<T> il) {assign(il.begin(), il.end());return *this;}template <class It, class Category = typenamestd::iterator_traits<It>::iterator_category>void assign(It first, It last) {assign(first, last, Category());}void assign(size_type n, VT value) {if (n > capacity()) {// Not enough space. Do not reserve in place, since we will// discard the old values anyways.if (dataIsInternalAndNotVT(value)) {T copy(std::move(value));impl_.reset(n);M_uninitialized_fill_n_e(n, copy);} else {impl_.reset(n);M_uninitialized_fill_n_e(n, value);}} else if (n <= size()) {auto newE = impl_.b_ + n;std::fill(impl_.b_, newE, value);M_destroy_range_e(newE);} else {std::fill(impl_.b_, impl_.e_, value);M_uninitialized_fill_n_e(n - size(), value);}}void assign(std::initializer_list<T> il) {assign(il.begin(), il.end());}allocator_type get_allocator() const noexcept {return impl_;}private:// contract dispatch for iterator types fbvector(It first, It last)template <class ForwardIterator>fbvector(ForwardIterator first, ForwardIterator last,const Allocator& a, std::forward_iterator_tag): impl_(std::distance(first, last), a){ M_uninitialized_copy_e(first, last); }template <class InputIterator>fbvector(InputIterator first, InputIterator last,const Allocator& a, std::input_iterator_tag): impl_(a){ for (; first != last; ++first) emplace_back(*first); }// contract dispatch for allocator movement in operator=(fbvector&&)voidmoveFrom(fbvector&& other, std::true_type) {swap(impl_, other.impl_);}void moveFrom(fbvector&& other, std::false_type) {if (impl_ == other.impl_) {impl_.swapData(other.impl_);} else {impl_.reset(other.size());M_uninitialized_move_e(other.begin(), other.end());}}// contract dispatch for iterator types in assign(It first, It last)template <class ForwardIterator>void assign(ForwardIterator first, ForwardIterator last,std::forward_iterator_tag) {auto const newSize = std::distance(first, last);if (newSize > capacity()) {impl_.reset(newSize);M_uninitialized_copy_e(first, last);} else if (newSize <= size()) {auto newEnd = std::copy(first, last, impl_.b_);M_destroy_range_e(newEnd);} else {auto mid = S_copy_n(impl_.b_, first, size());M_uninitialized_copy_e<decltype(last)>(mid, last);}}template <class InputIterator>void assign(InputIterator first, InputIterator last,std::input_iterator_tag) {auto p = impl_.b_;for (; first != last && p != impl_.e_; ++first, ++p) {*p = *first;}if (p != impl_.e_) {M_destroy_range_e(p);} else {for (; first != last; ++first) emplace_back(*first);}}// contract dispatch for aliasing under VT optimizationbool dataIsInternalAndNotVT(const T& t) {if (should_pass_by_value::value) return false;return dataIsInternal(t);}bool dataIsInternal(const T& t) {return UNLIKELY(impl_.b_ <= std::addressof(t) &&std::addressof(t) < impl_.e_);}//===========================================================================//---------------------------------------------------------------------------// iteratorspublic:iterator begin() noexcept {return impl_.b_;}const_iterator begin() const noexcept {return impl_.b_;}iterator end() noexcept {return impl_.e_;}const_iterator end() const noexcept {return impl_.e_;}reverse_iterator rbegin() noexcept {return reverse_iterator(end());}const_reverse_iterator rbegin() const noexcept {return const_reverse_iterator(end());}reverse_iterator rend() noexcept {return reverse_iterator(begin());}const_reverse_iterator rend() const noexcept {return const_reverse_iterator(begin());}const_iterator cbegin() const noexcept {return impl_.b_;}const_iterator cend() const noexcept {return impl_.e_;}const_reverse_iterator crbegin() const noexcept {return const_reverse_iterator(end());}const_reverse_iterator crend() const noexcept {return const_reverse_iterator(begin());}//===========================================================================//---------------------------------------------------------------------------// capacitypublic:size_type size() const noexcept {return impl_.e_ - impl_.b_;}size_type max_size() const noexcept {// good luck gettin' therereturn ~size_type(0);}void resize(size_type n) {if (n <= size()) {M_destroy_range_e(impl_.b_ + n);} else {reserve(n);M_uninitialized_fill_n_e(n - size());}}void resize(size_type n, VT t) {if (n <= size()) {M_destroy_range_e(impl_.b_ + n);} else if (dataIsInternalAndNotVT(t) && n > capacity()) {T copy(t);reserve(n);M_uninitialized_fill_n_e(n - size(), copy);} else {reserve(n);M_uninitialized_fill_n_e(n - size(), t);}}size_type capacity() const noexcept {return impl_.z_ - impl_.b_;}bool empty() const noexcept {return impl_.b_ == impl_.e_;}void reserve(size_type n) {if (n <= capacity()) return;if (impl_.b_ && reserve_in_place(n)) return;auto newCap = folly::goodMallocSize(n * sizeof(T)) / sizeof(T);auto newB = M_allocate(newCap);try {M_relocate(newB);} catch (...) {M_deallocate(newB, newCap);throw;}if (impl_.b_)M_deallocate(impl_.b_, impl_.z_ - impl_.b_);impl_.z_ = newB + newCap;impl_.e_ = newB + (impl_.e_ - impl_.b_);impl_.b_ = newB;}void shrink_to_fit() noexcept {auto const newCapacityBytes = folly::goodMallocSize(size() * sizeof(T));auto const newCap = newCapacityBytes / sizeof(T);auto const oldCap = capacity();if (newCap >= oldCap) return;void* p = impl_.b_;if ((rallocm && usingStdAllocator::value) &&newCapacityBytes >= folly::jemallocMinInPlaceExpandable &&rallocm(&p, nullptr, newCapacityBytes, 0, ALLOCM_NO_MOVE)== ALLOCM_SUCCESS) {impl_.z_ += newCap - oldCap;} else {T* newB; // intentionally uninitializedtry {newB = M_allocate(newCap);try {M_relocate(newB);} catch (...) {M_deallocate(newB, newCap);return; // swallow the error}} catch (...) {return;}if (impl_.b_)M_deallocate(impl_.b_, impl_.z_ - impl_.b_);impl_.z_ = newB + newCap;impl_.e_ = newB + (impl_.e_ - impl_.b_);impl_.b_ = newB;}}private:bool reserve_in_place(size_type n) {if (!usingStdAllocator::value || !rallocm) return false;// jemalloc can never grow in place blocks smaller than 4096 bytes.if ((impl_.z_ - impl_.b_) * sizeof(T) <folly::jemallocMinInPlaceExpandable) return false;auto const newCapacityBytes = folly::goodMallocSize(n * sizeof(T));void* p = impl_.b_;if (rallocm(&p, nullptr, newCapacityBytes, 0, ALLOCM_NO_MOVE)== ALLOCM_SUCCESS) {impl_.z_ = impl_.b_ + newCapacityBytes / sizeof(T);return true;}return false;}//===========================================================================//---------------------------------------------------------------------------// element accesspublic:reference operator[](size_type n) {assert(n < size());return impl_.b_[n];}const_reference operator[](size_type n) const {assert(n < size());return impl_.b_[n];}const_reference at(size_type n) const {if (UNLIKELY(n >= size())) {throw std::out_of_range("fbvector: index is greater than size.");}return (*this)[n];}reference at(size_type n) {auto const& cThis = *this;return const_cast<reference>(cThis.at(n));}reference front() {assert(!empty());return *impl_.b_;}const_reference front() const {assert(!empty());return *impl_.b_;}reference back() {assert(!empty());return impl_.e_[-1];}const_reference back() const {assert(!empty());return impl_.e_[-1];}//===========================================================================//---------------------------------------------------------------------------// data accesspublic:T* data() noexcept {return impl_.b_;}const T* data() const noexcept {return impl_.b_;}//===========================================================================//---------------------------------------------------------------------------// modifiers (common)public:template <class... Args>void emplace_back(Args&&... args) {if (impl_.e_ != impl_.z_) {M_construct(impl_.e_, std::forward<Args>(args)...);++impl_.e_;} else {emplace_back_aux(std::forward<Args>(args)...);}}voidpush_back(const T& value) {if (impl_.e_ != impl_.z_) {M_construct(impl_.e_, value);++impl_.e_;} else {emplace_back_aux(value);}}voidpush_back(T&& value) {if (impl_.e_ != impl_.z_) {M_construct(impl_.e_, std::move(value));++impl_.e_;} else {emplace_back_aux(std::move(value));}}void pop_back() {assert(!empty());--impl_.e_;M_destroy(impl_.e_);}void swap(fbvector& other) noexcept {if (!usingStdAllocator::value &&A::propagate_on_container_swap::value)swap(impl_, other.impl_);else impl_.swapData(other.impl_);}void clear() noexcept {M_destroy_range_e(impl_.b_);}private:// std::vector implements a similar function with a different growth// strategy: empty() ? 1 : capacity() * 2.//// fbvector grows differently on two counts://// (1) initial size// Instead of grwoing to size 1 from empty, and fbvector allocates at// least 64 bytes. You may still use reserve to reserve a lesser amount// of memory.// (2) 1.5x// For medium-sized vectors, the growth strategy is 1.5x. See the docs// for details.// This does not apply to very small or very large fbvectors. This is a// heuristic.// A nice addition to fbvector would be the capability of having a user-// defined growth strategy, probably as part of the allocator.//size_type computePushBackCapacity() const {return empty() ? std::max(64 / sizeof(T), size_type(1)): capacity() < folly::jemallocMinInPlaceExpandable / sizeof(T)? capacity() * 2: sizeof(T) > folly::jemallocMinInPlaceExpandable / 2 && capacity() == 1? 2: capacity() > 4096 * 32 / sizeof(T)? capacity() * 2: (capacity() * 3 + 1) / 2;}template <class... Args>void emplace_back_aux(Args&&... args);//===========================================================================//---------------------------------------------------------------------------// modifiers (erase)public:iterator erase(const_iterator position) {return erase(position, position + 1);}iterator erase(const_iterator first, const_iterator last) {assert(isValid(first) && isValid(last));assert(first <= last);if (first != last) {if (last == end()) {M_destroy_range_e((iterator)first);} else {if (folly::IsRelocatable<T>::value && usingStdAllocator::value) {D_destroy_range_a((iterator)first, (iterator)last);if (last - first >= cend() - last) {std::memcpy((iterator)first, last, (cend() - last) * sizeof(T));} else {std::memmove((iterator)first, last, (cend() - last) * sizeof(T));}impl_.e_ -= (last - first);} else {std::copy(std::make_move_iterator((iterator)last),std::make_move_iterator(end()), (iterator)first);auto newEnd = impl_.e_ - std::distance(first, last);M_destroy_range_e(newEnd);}}}return (iterator)first;}//===========================================================================//---------------------------------------------------------------------------// modifiers (insert)private: // we have the private section first because it defines some macrosbool isValid(const_iterator it) {return cbegin() <= it && it <= cend();}size_type computeInsertCapacity(size_type n) {size_type nc = std::max(computePushBackCapacity(), size() + n);size_type ac = folly::goodMallocSize(nc * sizeof(T)) / sizeof(T);return ac;}//---------------------------------------------------------------------------//// make_window takes an fbvector, and creates an uninitialized gap (a// window) at the given position, of the given size. The fbvector must// have enough capacity.//// Explanation by picture.//// 123456789______// ^// make_window here of size 3//// 1234___56789___//// If something goes wrong and the window must be destroyed, use// undo_window to provide a weak exception guarantee. It destroys// the right ledge.//// 1234___________////---------------------------------------------------------------------------//// wrap_frame takes an inverse window and relocates an fbvector around it.// The fbvector must have at least as many elements as the left ledge.//// Explanation by picture.//// START// fbvector: inverse window:// 123456789______ _____abcde_______// [idx][ n ]//// RESULT// _______________ 12345abcde6789___////---------------------------------------------------------------------------//// insert_use_fresh_memory returns true iff the fbvector should use a fresh// block of memory for the insertion. If the fbvector does not have enough// spare capacity, then it must return true. Otherwise either true or false// may be returned.////---------------------------------------------------------------------------//// These three functions, make_window, wrap_frame, and// insert_use_fresh_memory, can be combined into a uniform interface.// Since that interface involves a lot of case-work, it is built into// some macros: FOLLY_FBVECTOR_INSERT_(START|TRY|END)// Macros are used in an attempt to let GCC perform better optimizations,// especially control flow optimization.////---------------------------------------------------------------------------// windowvoid make_window(iterator position, size_type n) {assert(isValid(position));assert(size() + n <= capacity());assert(n != 0);auto tail = std::distance(position, impl_.e_);if (tail <= n) {relocate_move(position + n, position, impl_.e_);relocate_done(position + n, position, impl_.e_);impl_.e_ += n;} else {if (folly::IsRelocatable<T>::value && usingStdAllocator::value) {std::memmove(position + n, position, tail * sizeof(T));impl_.e_ += n;} else {D_uninitialized_move_a(impl_.e_, impl_.e_ - n, impl_.e_);try {std::copy_backward(std::make_move_iterator(position),std::make_move_iterator(impl_.e_ - n), impl_.e_);} catch (...) {D_destroy_range_a(impl_.e_ - n, impl_.e_ + n);impl_.e_ -= n;throw;}impl_.e_ += n;D_destroy_range_a(position, position + n);}}}void undo_window(iterator position, size_type n) noexcept {D_destroy_range_a(position + n, impl_.e_);impl_.e_ = position;}//---------------------------------------------------------------------------// framevoid wrap_frame(T* ledge, size_type idx, size_type n) {assert(size() >= idx);assert(n != 0);relocate_move(ledge, impl_.b_, impl_.b_ + idx);try {relocate_move(ledge + idx + n, impl_.b_ + idx, impl_.e_);} catch (...) {relocate_undo(ledge, impl_.b_, impl_.b_ + idx);throw;}relocate_done(ledge, impl_.b_, impl_.b_ + idx);relocate_done(ledge + idx + n, impl_.b_ + idx, impl_.e_);}//---------------------------------------------------------------------------// use fresh?bool insert_use_fresh(const_iterator cposition, size_type n) {if (cposition == cend()) {if (size() + n <= capacity()) return false;if (reserve_in_place(size() + n)) return false;return true;}if (size() + n > capacity()) return true;return false;}//---------------------------------------------------------------------------// interface#define FOLLY_FBVECTOR_INSERT_START(cpos, n) \assert(isValid(cpos)); \T* position = const_cast<T*>(cpos); \size_type idx = std::distance(impl_.b_, position); \bool fresh = insert_use_fresh(position, n); \T* b; \size_type newCap = 0; \\if (fresh) { \newCap = computeInsertCapacity(n); \b = M_allocate(newCap); \} else { \make_window(position, n); \b = impl_.b_; \} \\T* start = b + idx; \\try { \// construct the inserted elements#define FOLLY_FBVECTOR_INSERT_TRY(cpos, n) \} catch (...) { \if (fresh) { \M_deallocate(b, newCap); \} else { \undo_window(position, n); \} \throw; \} \\if (fresh) { \try { \wrap_frame(b, idx, n); \} catch (...) { \// delete the inserted elements (exception has been thrown)#define FOLLY_FBVECTOR_INSERT_END(cpos, n) \M_deallocate(b, newCap); \throw; \} \if (impl_.b_) M_deallocate(impl_.b_, capacity()); \impl_.set(b, size() + n, newCap); \return impl_.b_ + idx; \} else { \return position; \} \//---------------------------------------------------------------------------// insert functionspublic:template <class... Args>iterator emplace(const_iterator cpos, Args&&... args) {FOLLY_FBVECTOR_INSERT_START(cpos, 1)M_construct(start, std::forward<Args>(args)...);FOLLY_FBVECTOR_INSERT_TRY(cpos, 1)M_destroy(start);FOLLY_FBVECTOR_INSERT_END(cpos, 1)}iterator insert(const_iterator cpos, const T& value) {if (dataIsInternal(value)) return insert(cpos, T(value));FOLLY_FBVECTOR_INSERT_START(cpos, 1)M_construct(start, value);FOLLY_FBVECTOR_INSERT_TRY(cpos, 1)M_destroy(start);FOLLY_FBVECTOR_INSERT_END(cpos, 1)}iterator insert(const_iterator cpos, T&& value) {if (dataIsInternal(value)) return insert(cpos, T(std::move(value)));FOLLY_FBVECTOR_INSERT_START(cpos, 1)M_construct(start, std::move(value));FOLLY_FBVECTOR_INSERT_TRY(cpos, 1)M_destroy(start);FOLLY_FBVECTOR_INSERT_END(cpos, 1)}iterator insert(const_iterator cpos, size_type n, VT value) {if (n == 0) return (iterator)cpos;if (dataIsInternalAndNotVT(value)) return insert(cpos, n, T(value));FOLLY_FBVECTOR_INSERT_START(cpos, n)D_uninitialized_fill_n_a(start, n, value);FOLLY_FBVECTOR_INSERT_TRY(cpos, n)D_destroy_range_a(start, start + n);FOLLY_FBVECTOR_INSERT_END(cpos, n)}template <class It, class Category = typenamestd::iterator_traits<It>::iterator_category>iterator insert(const_iterator cpos, It first, It last) {return insert(cpos, first, last, Category());}iterator insert(const_iterator cpos, std::initializer_list<T> il) {return insert(cpos, il.begin(), il.end());}//---------------------------------------------------------------------------// insert dispatch for iterator typesprivate:template <class FIt>iterator insert(const_iterator cpos, FIt first, FIt last,std::forward_iterator_tag) {size_type n = std::distance(first, last);if (n == 0) return (iterator)cpos;FOLLY_FBVECTOR_INSERT_START(cpos, n)D_uninitialized_copy_a(start, first, last);FOLLY_FBVECTOR_INSERT_TRY(cpos, n)D_destroy_range_a(start, start + n);FOLLY_FBVECTOR_INSERT_END(cpos, n)}template <class IIt>iterator insert(const_iterator cpos, IIt first, IIt last,std::input_iterator_tag) {T* position = const_cast<T*>(cpos);assert(isValid(position));size_type idx = std::distance(begin(), position);fbvector storage(std::make_move_iterator(position),std::make_move_iterator(end()),A::select_on_container_copy_construction(impl_));M_destroy_range_e(position);for (; first != last; ++first) emplace_back(*first);insert(cend(), std::make_move_iterator(storage.begin()),std::make_move_iterator(storage.end()));return impl_.b_ + idx;}//===========================================================================//---------------------------------------------------------------------------// lexicographical functions (others from boost::totally_ordered superclass)public:bool operator==(const fbvector& other) const {return size() == other.size() && std::equal(begin(), end(), other.begin());}bool operator<(const fbvector& other) const {return std::lexicographical_compare(begin(), end(), other.begin(), other.end());}//===========================================================================//---------------------------------------------------------------------------// friendsprivate:template <class _T, class _A>friend _T* relinquish(fbvector<_T, _A>&);template <class _T, class _A>friend void attach(fbvector<_T, _A>&, _T* data, size_t sz, size_t cap);}; // class fbvector//=============================================================================//-----------------------------------------------------------------------------// outlined functions (gcc, you finicky compiler you)template <typename T, typename Allocator>template <class... Args>void fbvector<T, Allocator>::emplace_back_aux(Args&&... args) {size_type byte_sz = folly::goodMallocSize(computePushBackCapacity() * sizeof(T));if (usingStdAllocator::value&& rallocm&& ((impl_.z_ - impl_.b_) * sizeof(T) >=folly::jemallocMinInPlaceExpandable)) {// Try to reserve in place.// Ask rallocm to allocate in place at least size()+1 and at most sz space.// rallocm will allocate as much as possible within that range, which// is the best possible outcome: if sz space is available, take it all,// otherwise take as much as possible. If nothing is available, then fail.// In this fashion, we never relocate if there is a possibility of// expanding in place, and we never relocate by less than the desired// amount unless we cannot expand further. Hence we will not relocate// sub-optimally twice in a row (modulo the blocking memory being freed).size_type lower = folly::goodMallocSize(sizeof(T) + size() * sizeof(T));size_type upper = byte_sz;size_type extra = upper - lower;void* p = impl_.b_;size_t actual;if (rallocm(&p, &actual, lower, extra, ALLOCM_NO_MOVE)== ALLOCM_SUCCESS) {impl_.z_ = impl_.b_ + actual / sizeof(T);M_construct(impl_.e_, std::forward<Args>(args)...);++impl_.e_;return;}}// Reallocation failed. Perform a manual relocation.size_type sz = byte_sz / sizeof(T);auto newB = M_allocate(sz);auto newE = newB + size();try {if (folly::IsRelocatable<T>::value && usingStdAllocator::value) {// For linear memory access, relocate before construction.// By the test condition, relocate is noexcept.// Note that there is no cleanup to do if M_construct throws - that's// one of the beauties of relocation.// Benchmarks for this code have high variance, and seem to be close.relocate_move(newB, impl_.b_, impl_.e_);M_construct(newE, std::forward<Args>(args)...);++newE;} else {M_construct(newE, std::forward<Args>(args)...);++newE;try {M_relocate(newB);} catch (...) {M_destroy(newE - 1);throw;}}} catch (...) {M_deallocate(newB, sz);throw;}if (impl_.b_) M_deallocate(impl_.b_, size());impl_.b_ = newB;impl_.e_ = newE;impl_.z_ = newB + sz;}//=============================================================================//-----------------------------------------------------------------------------// specialized functionstemplate <class T, class A>void swap(fbvector<T, A>& lhs, fbvector<T, A>& rhs) noexcept {lhs.swap(rhs);}//=============================================================================//-----------------------------------------------------------------------------// othertemplate <class T, class A>void compactResize(fbvector<T, A>* v, size_t sz) {v->resize(sz);v->shrink_to_fit();}// DANGER//// relinquish and attach are not a members function specifically so that it is// awkward to call them. It is very easy to shoot yourself in the foot with// these functions.//// If you call relinquish, then it is your responsibility to free the data// and the storage, both of which may have been generated in a non-standard// way through the fbvector's allocator.//// If you call attach, it is your responsibility to ensure that the fbvector// is fresh (size and capacity both zero), and that the supplied data is// capable of being manipulated by the allocator.// It is acceptable to supply a stack pointer IF:// (1) The vector's data does not outlive the stack pointer. This includes// extension of the data's life through a move operation.// (2) The pointer has enough capacity that the vector will never be// relocated.// (3) Insert is not called on the vector; these functions have leeway to// relocate the vector even if there is enough capacity.// (4) A stack pointer is compatible with the fbvector's allocator.//template <class T, class A>T* relinquish(fbvector<T, A>& v) {T* ret = v.data();v.impl_.b_ = v.impl_.e_ = v.impl_.z_ = nullptr;return ret;}template <class T, class A>void attach(fbvector<T, A>& v, T* data, size_t sz, size_t cap) {assert(v.data() == nullptr);v.impl_.b_ = data;v.impl_.e_ = data + sz;v.impl_.z_ = data + cap;}} // namespace folly#endif // FOLLY_FBVECTOR_H


0 0