Data Structures
Custom Data Structures
Union Find Disjoint Set with path compression only.
This implementation 'ranks' using the size of each group and still guarantees a maximum tree height of log N by always merging the smaller group into the larger group. (Proof: each time a node's root changes, the size of its group at least doubles, so a node's root can change at most log N times; this is the same small-to-large argument used in Heavy Light Decomposition.)
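A minimal sketch of such a structure (struct and method names are my own, not the reference's):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Union-Find with path compression and union by size.
// Merging the smaller group into the larger one bounds tree height by log N.
struct UFDS {
    vector<int> p, sz;
    UFDS(int n) : p(n), sz(n, 1) {
        for (int i = 0; i < n; i++) p[i] = i;
    }
    int find(int x) {
        return p[x] == x ? x : p[x] = find(p[x]); // path compression
    }
    bool merge(int a, int b) {        // returns false if already joined
        a = find(a); b = find(b);
        if (a == b) return false;
        if (sz[a] < sz[b]) swap(a, b); // merge smaller into larger
        p[b] = a;
        sz[a] += sz[b];
        return true;
    }
};
```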
Policy Based Data Structures (PBDS) are non-standard data structures bundled with most GNU C++ compilers.
Very little documentation is found about these elusive data structures.
Do note that they do not work with all compilers. Do test them on the judge before writing code using these data structures.
Essentially an STL set or map with the two extra functions below:
find_by_order(x)
returns an iterator pointing to the xth element in the set (0-indexed).
order_of_key(k)
gives you the order of the key k, i.e. the number of keys strictly smaller than k, as if it were inserted into the set (0-indexed).
The time complexity of the above operations is not formally documented. Benchmarks show that it is on average between log N and log^2 N, where N is the number of elements in the data structure.
Note: On older compilers, we have to use null_mapped_type instead of null_type.
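A minimal example of declaring and using such an ordered set (the typedef name is my own):

```cpp
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace __gnu_pbds;

// STL-set-like tree with order statistics.
// On older compilers, replace null_type with null_mapped_type.
typedef tree<int, null_type, std::less<int>,
             rb_tree_tag, tree_order_statistics_node_update> ordered_set;
```

With keys {2, 4, 6} inserted, find_by_order(1) points to 4 (the element at index 1), and order_of_key(5) is 2 (two keys are smaller than 5).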
Under construction!
Point update, range sum query
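A minimal sketch of the standard 1-indexed Fenwick tree (names are my own):

```cpp
#include <vector>
using namespace std;

// Fenwick (binary indexed) tree, 1-indexed.
// update: add v at position i. query(i): sum of A[1..i].
struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void update(int i, long long v) {
        for (; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    long long query(int i) {          // prefix sum A[1..i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    long long query(int l, int r) {   // range sum A[l..r]
        return query(r) - query(l - 1);
    }
};
```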
Range update, point query (Store the 'deltas')
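A sketch of the delta trick (names are my own): to add v to A[l..r], add v at index l and -v at index r+1; the value at an index is then the prefix sum of the stored deltas.

```cpp
#include <vector>
using namespace std;

// Range update, point query: the Fenwick stores deltas.
struct FenwickRUPQ {
    vector<long long> t;
    FenwickRUPQ(int n) : t(n + 2, 0) {}
    void add(int i, long long v) {
        for (; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    void update(int l, int r, long long v) { // A[l..r] += v
        add(l, v);
        add(r + 1, -v);
    }
    long long query(int i) {                 // value of A[i]
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};
```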
Using 2 fenwicks to simulate range update range query in log N
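One common scheme for this (a sketch; names are my own): maintain trees B1 and B2 so that the prefix sum A[1..i] equals sum(B1, i) * i - sum(B2, i).

```cpp
#include <vector>
using namespace std;

// Range update, range sum query using two Fenwick trees.
struct FenwickRURQ {
    vector<long long> t1, t2;
    FenwickRURQ(int n) : t1(n + 2, 0), t2(n + 2, 0) {}
    void add(vector<long long>& t, int i, long long v) {
        for (; i < (int)t.size(); i += i & -i) t[i] += v;
    }
    long long sum(vector<long long>& t, int i) {
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
    void update(int l, int r, long long v) { // A[l..r] += v
        add(t1, l, v);           add(t1, r + 1, -v);
        add(t2, l, v * (l - 1)); add(t2, r + 1, -v * r);
    }
    long long prefix(int i) { return sum(t1, i) * i - sum(t2, i); }
    long long query(int l, int r) { return prefix(r) - prefix(l - 1); }
};
```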
Fenwicks can be used to solve Range Min/Max Query only if the queries are in the form: what is the min/max of the array from index 1 to X? Furthermore, updates can only be incrementing for Range Maximum Query, meaning that you cannot update node 2 with value 7 and then update it with value 5 again. For Range Minimum Query, updates must be decrementing, meaning that you cannot update node 2 with value 6 and then update it with value 8 again.
With those conditions satisfied, only simple changes to the normal Fenwick code are required to make it work for Range Min/Max Query.
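A sketch of the prefix-maximum variant under those conditions (names are my own; replace max with min, and initialise appropriately, for the decreasing case):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Fenwick for prefix maximum. Updates may only increase values:
// update(i, v) performs A[i] = max(A[i], v); query(i) = max of A[1..i].
struct FenwickMax {
    vector<long long> t;
    FenwickMax(int n) : t(n + 1, 0) {}
    void update(int i, long long v) {
        for (; i < (int)t.size(); i += i & -i) t[i] = max(t[i], v);
    }
    long long query(int i) {
        long long m = 0;
        for (; i > 0; i -= i & -i) m = max(m, t[i]);
        return m;
    }
};
```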
Point update in 2D space, range sum in 2D space
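A sketch of the 2D extension (names are my own): each loop of the 1D Fenwick is nested once more, and rectangle sums follow by inclusion-exclusion.

```cpp
#include <vector>
using namespace std;

// 2D Fenwick: point update at (x, y), prefix sums over (1,1)..(x,y).
struct Fenwick2D {
    int n, m;
    vector<vector<long long>> t;
    Fenwick2D(int n, int m) : n(n), m(m), t(n + 1, vector<long long>(m + 1, 0)) {}
    void update(int x, int y, long long v) {
        for (int i = x; i <= n; i += i & -i)
            for (int j = y; j <= m; j += j & -j) t[i][j] += v;
    }
    long long query(int x, int y) { // sum of rectangle (1,1)..(x,y)
        long long s = 0;
        for (int i = x; i > 0; i -= i & -i)
            for (int j = y; j > 0; j -= j & -j) s += t[i][j];
        return s;
    }
    long long query(int x1, int y1, int x2, int y2) { // inclusion-exclusion
        return query(x2, y2) - query(x1 - 1, y2)
             - query(x2, y1 - 1) + query(x1 - 1, y1 - 1);
    }
};
```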
Range min/max/sum query
Range add/set with lazy propagation
Supports lazy node creation. Memory used: minimum of O(R log R) or O(U log R), where R is the range of segment tree and U is the number of updates. Number of queries does not affect memory complexity.
Time Complexity: O(N) for array construction, guaranteed O(log R) per query or update.
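A minimal sketch of the lazy propagation idea only (names are my own; this sketch covers range add with range sum, and omits the range-set operation and the lazy node creation described above):

```cpp
#include <vector>
using namespace std;

// Lazy segment tree: range add, range sum, 0-indexed inclusive [l, r].
struct LazySegTree {
    int n;
    vector<long long> sum, lazy;
    LazySegTree(int n) : n(n), sum(4 * n, 0), lazy(4 * n, 0) {}
    void push(int node, int s, int e) {      // apply pending add
        if (lazy[node] == 0) return;
        sum[node] += lazy[node] * (e - s + 1);
        if (s != e) {                        // defer to children
            lazy[2 * node] += lazy[node];
            lazy[2 * node + 1] += lazy[node];
        }
        lazy[node] = 0;
    }
    void update(int l, int r, long long v, int node, int s, int e) {
        push(node, s, e);
        if (r < s || e < l) return;
        if (l <= s && e <= r) { lazy[node] += v; push(node, s, e); return; }
        int m = (s + e) / 2;
        update(l, r, v, 2 * node, s, m);
        update(l, r, v, 2 * node + 1, m + 1, e);
        sum[node] = sum[2 * node] + sum[2 * node + 1];
    }
    long long query(int l, int r, int node, int s, int e) {
        if (r < s || e < l) return 0;
        push(node, s, e);
        if (l <= s && e <= r) return sum[node];
        int m = (s + e) / 2;
        return query(l, r, 2 * node, s, m) + query(l, r, 2 * node + 1, m + 1, e);
    }
    void update(int l, int r, long long v) { update(l, r, v, 1, 0, n - 1); }
    long long query(int l, int r) { return query(l, r, 1, 0, n - 1); }
};
```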
Shorter version of the above segment tree using macros.
Alternative code from Hubert's NOI Reference
Note that s and e are inclusive and the segment tree does not implement lazy update (the set and add functions are O(N)).
Supports fast range add and query by lazy propagating the changes.
The below code bounds the range to a maximum value (A[i] = max(A[i], v)) via updates. A query returns the single value at the provided index.
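A sketch of one way to do this (names are my own): store a pending bound at each node; a point query is then the maximum bound along the root-to-leaf path, so no downward propagation is needed.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Range A[i] = max(A[i], v) on [l, r] (0-indexed, inclusive), point query.
struct MaxBoundTree {
    int n;
    vector<long long> b;                 // pending bound per node
    MaxBoundTree(int n) : n(n), b(4 * n, 0) {}
    void update(int l, int r, long long v, int node, int s, int e) {
        if (r < s || e < l) return;
        if (l <= s && e <= r) { b[node] = max(b[node], v); return; }
        int m = (s + e) / 2;
        update(l, r, v, 2 * node, s, m);
        update(l, r, v, 2 * node + 1, m + 1, e);
    }
    long long query(int i, int node, int s, int e) {
        if (s == e) return b[node];
        int m = (s + e) / 2;
        long long child = (i <= m) ? query(i, 2 * node, s, m)
                                   : query(i, 2 * node + 1, m + 1, e);
        return max(b[node], child);      // max bound on the path
    }
    void update(int l, int r, long long v) { update(l, r, v, 1, 0, n - 1); }
    long long query(int i) { return query(i, 1, 0, n - 1); }
};
```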
Supports arbitrarily large ranges by creating segment tree nodes lazily (values default to 0). The below code has lazy propagation as well, so a range add is O(log R), where R is the size of the range.
The below is the code for IOI 2013 Game (GCD) which obtains full score.
Range min query on static array
Time Complexity: O(N log N) for construction, guaranteed O(1) per query.
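A minimal sketch of such a sparse table (names are my own): level k stores the minimum of each length-2^k block, and a query combines two overlapping blocks in O(1).

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Sparse table for range minimum query on a static array.
struct SparseTable {
    vector<vector<int>> sp;
    vector<int> lg;
    SparseTable(const vector<int>& a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int i = 2; i <= n; i++) lg[i] = lg[i / 2] + 1;
        sp.assign(lg[n] + 1, vector<int>(n));
        sp[0] = a;
        for (int k = 1; (1 << k) <= n; k++)
            for (int i = 0; i + (1 << k) <= n; i++)
                sp[k][i] = min(sp[k - 1][i], sp[k - 1][i + (1 << (k - 1))]);
    }
    int query(int l, int r) { // min of a[l..r], 0-indexed inclusive
        int k = lg[r - l + 1];
        return min(sp[k][l], sp[k][r - (1 << k) + 1]);
    }
};
```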
Allows one to find the Dth ancestor of node X in O(log N) time per query.
Allows one to find the Lowest Common Ancestor of 2 nodes in O(log N) using sparse table pre-computation.
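A sketch of the binary lifting table behind both operations (names are my own; it assumes nodes are numbered so that par[x] < x, with node 0 as the root and par[0] = -1):

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// up[k][x] is the 2^k-th ancestor of x, or -1 if it does not exist.
struct BinaryLifting {
    int LOG;
    vector<vector<int>> up;
    vector<int> depth;
    BinaryLifting(const vector<int>& par) {
        int n = par.size();
        LOG = 1;
        while ((1 << LOG) < n) LOG++;
        up.assign(LOG + 1, vector<int>(n, -1));
        depth.assign(n, 0);
        for (int x = 0; x < n; x++) up[0][x] = par[x];
        for (int x = 1; x < n; x++) depth[x] = depth[par[x]] + 1;
        for (int k = 1; k <= LOG; k++)
            for (int x = 0; x < n; x++)
                if (up[k - 1][x] != -1) up[k][x] = up[k - 1][up[k - 1][x]];
    }
    int kth_ancestor(int x, int d) {     // walk the set bits of d
        for (int k = 0; k <= LOG && x != -1; k++)
            if (d >> k & 1) x = up[k][x];
        return x;
    }
    int lca(int a, int b) {
        if (depth[a] < depth[b]) swap(a, b);
        a = kth_ancestor(a, depth[a] - depth[b]); // equalise depths
        if (a == b) return a;
        for (int k = LOG; k >= 0; k--)            // lift both together
            if (up[k][a] != up[k][b]) { a = up[k][a]; b = up[k][b]; }
        return up[0][a];
    }
};
```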
par[x] stores the parent of node x; the root's parent is -1. (This code assumes that the tree is connected.)
num_child[x] stores the number of nodes in the subtree rooted at x (including x itself).
height[x] stores the height of (or distance from the root to) node x.
The code will split the tree into multiple linear paths or segments. segcount contains the number of linear paths. These linear paths are labelled from 1 to segcount, and segitem[s] contains the list of nodes in the linear path labelled s, ordered in increasing height from the root.
segroot[s] stores the parent of the first node in segitem[s]; this is essentially where the path 'originates' from.
segdepth[s] stores the number of light edges any node in path s (segitem[s]) must pass through to reach the root. For operations between paths on the tree, this is also the number of additional linear paths you are required to update.
The code below solves the Lowest Common Ancestor problem using heavy light decomposition.
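A compact sketch of the idea (names are my own; it stores each node's chain head rather than the segitem/segroot lists described above, but the decomposition is the same): to find the LCA, repeatedly jump the node whose chain head is deeper up to that head's parent, until both nodes share a chain.

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// LCA via heavy-light decomposition (chain-head form).
struct HLD {
    vector<int> par, height, heavy, head, sz;
    HLD(const vector<vector<int>>& adj, int root)
        : par(adj.size(), -1), height(adj.size(), 0),
          heavy(adj.size(), -1), head(adj.size()), sz(adj.size(), 1) {
        int n = adj.size();
        vector<int> order = {root};
        for (int i = 0; i < (int)order.size(); i++)   // BFS order
            for (int v : adj[order[i]])
                if (v != par[order[i]]) {
                    par[v] = order[i];
                    height[v] = height[order[i]] + 1;
                    order.push_back(v);
                }
        for (int i = n - 1; i > 0; i--) {             // subtree sizes
            int v = order[i];
            sz[par[v]] += sz[v];
            if (heavy[par[v]] == -1 || sz[v] > sz[heavy[par[v]]])
                heavy[par[v]] = v;                    // heaviest child
        }
        for (int v : order)                           // chain heads
            head[v] = (par[v] != -1 && heavy[par[v]] == v) ? head[par[v]] : v;
    }
    int lca(int a, int b) {
        while (head[a] != head[b]) {
            if (height[head[a]] < height[head[b]]) swap(a, b);
            a = par[head[a]];                         // leave a's chain
        }
        return height[a] < height[b] ? a : b;
    }
};
```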